This project is dedicated to building a "Synthetic Human" named Kenzy, to whom we have assigned the pronoun "she". Kenzy supports visual face recognition (opencv/opencv), speech transcription (coqui), and speech synthesis (festival or mimic3). She is written in Python and targets primarily single-board computer (SBC) platforms such as the Raspberry Pi.
Visit our main site: https://kenzy.ai/
Kenzy's architecture is divided into two main components: containers and devices. Containers handle communication with other containers, while devices control input and output operations. The most important container is the Brain, a special container that collects data and provides the skill engine for reacting to inputs. Although the Brain supports all the methods of a normal container, it is recommended to create a separate container to hold your devices.
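The container/device split described above can be pictured with a toy sketch. Note that all class and method names below are illustrative only, not Kenzy's actual API:

```python
# Toy sketch of the container/device pattern: devices handle I/O,
# containers group devices, and the Brain reacts to inputs via skills.
# These names are hypothetical and do NOT reflect Kenzy's real API.

class Device:
    """A device performs a single input or output operation."""
    def read(self):
        raise NotImplementedError

class Listener(Device):
    """Stands in for a speech-to-text input device."""
    def read(self):
        return "turn on the lights"  # pretend transcription

class Container:
    """A container groups devices and forwards their data onward."""
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def collect(self):
        return [d.read() for d in self.devices]

class Brain(Container):
    """A special container: collects data and reacts via skills."""
    def __init__(self):
        super().__init__()
        self.skills = []

    def add_skill(self, skill):
        self.skills.append(skill)

    def react(self):
        for text in self.collect():
            for skill in self.skills:
                skill(text)
```

The point of the split is that devices stay small and single-purpose, while the Brain is the only component that needs to know how to react.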
All options, configurations, and startup parameters are driven by the configuration file saved to the following location:
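As a purely hypothetical illustration of what such a configuration might contain (the format and every key below are assumptions, not Kenzy's documented schema; consult the project documentation for the real layout):

```json
{
  "brain": {
    "host": "0.0.0.0",
    "port": 8080
  },
  "container": {
    "port": 8081,
    "devices": ["kenzy.devices.Listener", "kenzy.devices.Speaker"]
  }
}
```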
Python Module Overview
| Module | Description | Default Port |
| ------ | ----------- | ------------ |
| kenzy.containers.Brain | Main service for processing I/O. | 8080 |
| kenzy.containers.DeviceContainer | Secondary service for devices. | 8081 |
Python Device Module Overview
| Module | Description |
| ------ | ----------- |
| kenzy.devices.Speaker | Audio output device for text-to-speech conversion |
| kenzy.devices.Listener | Microphone device for speech-to-text conversion |
| kenzy.devices.Watcher | Video/camera device for object recognition |
| kenzy.devices.KasaDevice | Smart plug device for Kasa devices |
| kenzy.panels.RaspiPanel | Panel device designed for the Raspberry Pi 7" screen @ 1024x600 |
Kenzy is available through pip, but the built-in devices may require a few extra libraries. Please visit the Basic Install page for more details.
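The base install from pip looks like the following (package name assumed to match the project name; see the Basic Install page for the device-specific extras):

```shell
# Install the core Kenzy package from PyPI
pip install kenzy
```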
Web Control Panel
If everything is working properly, you should be able to point your browser at the web control panel running on the Brain engine to test it out. The default URL is:
Help & Support
Help and additional details are available at https://kenzy.ai