Replies: 3 comments 5 replies
-
Hi, glad to hear! That field points to a Willow Inference Server instance. We provide a default/demo Willow Inference Server (releasing this week) on a best-effort basis. Do note that for the kind of performance we're providing, we more or less require CUDA. However, the performance is excellent and supports cards starting from the GTX 1060 6GB (the GTX 1070 appears to be the current "sweet spot" for price/performance on the used market). We will be including pointers and benchmarks for very affordable and widely available GPUs, sample hardware configurations, etc. With the low cost of a Willow device, you can get a used GPU ($100) and Willow ($50) for roughly the same cost as one Raspberry Pi with microphone, speaker, SD card, enclosure, etc. Of course, if you have multiple Willows the cost savings vs. Raspberry Pi get even better, as a single GPU can be shared across many, many Willow devices.
-
I must also add, what you've accomplished in under 2 months is absolutely astounding!!! I can't wait to see where this project goes. I'd love to test just about anything you release on this. If you need a guinea pig for the esp-32-box-lite, keep me on speed-dial!!!
-
WIS released - discuss on HN
-
Hi! Just got my box set up and have been playing around with it, and it works quite nicely!
Just a quick clarification question:
I noticed the https://infer.tovera.io/api/willow URL in the configuration, and the readme says you can change it to your own inference server. Is this something I should point at Home Assistant, or is the idea to wait for the Willow Inference Server release itself, host that locally, and then point to my local inference server?
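For anyone landing here with the same question, the swap amounts to replacing the default hosted endpoint with your own once you self-host WIS. The sketch below is purely illustrative: the `wis_url` helper, the `wis.local` hostname, and the port number are all assumptions, not part of Willow's actual configuration interface (which is just the single URL field mentioned above).

```python
# Hypothetical sketch of the URL swap described in this thread.
# DEFAULT_WIS_URL is the real demo endpoint from the discussion;
# the helper, hostname, and port below are illustrative assumptions.
DEFAULT_WIS_URL = "https://infer.tovera.io/api/willow"

def wis_url(local_host=None, port=19000):
    """Return the inference endpoint a Willow device would use.

    If local_host is given, point at a self-hosted WIS instance
    (hostname/port are placeholders); otherwise fall back to the
    public best-effort demo server.
    """
    if local_host:
        return f"https://{local_host}:{port}/api/willow"
    return DEFAULT_WIS_URL

print(wis_url())              # default hosted demo endpoint
print(wis_url("wis.local"))   # hypothetical self-hosted instance
```

The point is simply that the field is one URL: you don't point it at Home Assistant; you point it at whichever Willow Inference Server (demo or self-hosted) you want to handle inference.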