Any object can be set to trigger actions such as activating sounds, vibrations, or screen flashes; opening websites and apps; or even interacting with external devices.
For individuals: Dot Go helps visually impaired people navigate the world, find objects, and automate daily tasks so they can lead an independent life.
For businesses and organizations: Custom-trained computer vision models can make retail stores and public spaces more accessible to the visually impaired. With its platform approach, Dot Go lowers the cost and effort of developing individual apps.
With dedicated computer vision models, users can create automations such as the following (a code sketch of this trigger-action pattern follows the list):
- A bus stop sign finds the fastest connection home in the public transportation app.
- A monument opens the camera app to take a picture.
- A bottle of milk opens the Reminders app to cross itself off the shopping list.
- A painting in the museum leads the user to an article on Wikipedia.
- A pair of shoes on display at a store takes the user to the online product page with further information on features, from color to price.
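
Conceptually, each of these examples pairs a detected object label with a user-defined action. The following is a minimal, hypothetical Swift sketch of such a trigger-action table; the type names, rules, and URLs are invented for illustration and are not Dot Go's actual API:

```swift
import Foundation

// Hypothetical illustration of the trigger-action idea above.
// These types and rules are invented for this sketch.
enum TriggerAction {
    case openURL(URL)          // e.g. a Wikipedia article or product page
    case playSound(String)
    case vibrate
}

struct AutomationRule {
    let objectLabel: String    // a COCO class name such as "bottle"
    let action: TriggerAction
}

let rules: [AutomationRule] = [
    AutomationRule(objectLabel: "bottle",
                   action: .openURL(URL(string: "https://example.com/shopping-list")!)),
    AutomationRule(objectLabel: "bus", action: .playSound("transit-alert")),
]

// Run the first rule whose label matches a detected object.
func handle(detectedLabel: String) {
    guard let rule = rules.first(where: { $0.objectLabel == detectedLabel }) else { return }
    switch rule.action {
    case .openURL(let url):    print("Open \(url)")
    case .playSound(let name): print("Play sound \(name)")
    case .vibrate:             print("Vibrate")
    }
}

handle(detectedLabel: "bottle")   // -> Open https://example.com/shopping-list
```

A table like this is also what makes the zero-code promise plausible: adding an automation means appending a rule, not writing app logic.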
Features:
- LiDAR sensors (available on Pro-model iPhones from the iPhone 12 Pro onward) measure the distance to objects.
- Open-source computer vision models detect objects in the environment; the baseline model is YOLO trained on the COCO dataset (see the sketch after this list).
- Pocket mode enables hands-free use, further supported by wearables such as T-shirts and lanyards.
- Automating actions triggered by objects requires no coding knowledge.
- Custom presets can be created and shared by the community.
- Users can discover and download curated presets built by businesses on the Featured tab.
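
To make the detection and distance features concrete, here is a rough Swift sketch of how an iOS app can combine a CoreML-converted YOLO model (run through the Vision framework) with ARKit's LiDAR scene depth. This is an assumption-laden illustration, not Dot Go's published pipeline: the model file name "yolo.mlmodelc", the confidence threshold, and the simplified coordinate handling are all placeholders.

```swift
import ARKit
import CoreML
import Vision

// Sketch only: detect objects with a CoreML-converted YOLO model and read the
// LiDAR depth map for a distance estimate. "yolo.mlmodelc" is a placeholder
// model name; thresholds and coordinate handling are simplified assumptions.
final class ObjectDistanceDetector: NSObject, ARSessionDelegate {
    let session = ARSession()
    private var request: VNCoreMLRequest?

    func start() {
        session.delegate = self
        let config = ARWorldTrackingConfiguration()
        // sceneDepth is only available on LiDAR-equipped devices.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.run(config)

        if let url = Bundle.main.url(forResource: "yolo", withExtension: "mlmodelc"),
           let mlModel = try? MLModel(contentsOf: url),
           let vnModel = try? VNCoreMLModel(for: mlModel) {
            request = VNCoreMLRequest(model: vnModel)
        }
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let request = request else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
        try? handler.perform([request])
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }

        for obs in results where obs.confidence > 0.5 {
            let label = obs.labels.first?.identifier ?? "unknown"
            guard let depthMap = frame.sceneDepth?.depthMap else { continue }
            // Vision boxes are normalized with a bottom-left origin; flip y
            // before sampling the (top-left origin) depth map.
            let center = CGPoint(x: obs.boundingBox.midX, y: 1 - obs.boundingBox.midY)
            let meters = Self.depth(at: center, in: depthMap)
            print("\(label) at ~\(meters) m")
        }
    }

    // Read one Float32 depth value (in meters) from the LiDAR depth buffer.
    static func depth(at point: CGPoint, in depthMap: CVPixelBuffer) -> Float {
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
        let w = CVPixelBufferGetWidth(depthMap), h = CVPixelBufferGetHeight(depthMap)
        let x = min(max(Int(point.x * CGFloat(w)), 0), w - 1)
        let y = min(max(Int(point.y * CGFloat(h)), 0), h - 1)
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        let base = CVPixelBufferGetBaseAddress(depthMap)!
        return base.advanced(by: y * rowBytes)
            .assumingMemoryBound(to: Float32.self)[x]
    }
}
```

The depth value feeds naturally into the trigger-action table sketched earlier: a detection plus a distance is enough context to decide whether, say, a bus stop sign is close enough to act on.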
If you’d like to collaborate or simply ask a question, write to us at [email protected].
Dot Go is one of many award-winning technologies created by Dot Incorporation of South Korea. Others include the Dot Watch, a braille smartwatch; Dot Translate, an AI braille translation engine; and the Dot Mini, a braille translator for digital text, with many more to come.