voice control of the Squeak IDE


voice control of the Squeak IDE

Craig Latta

Hi all--

     As part of the Caffeine VR system, I'm working on voice
recognition. When wearing a VR headset without AR cameras, it's useful
to control the system by voice, without having to look at a physical
keyboard. I'm using the Web Speech API[1] on Chrome, which has a good
recognizer that requires no training. I've adapted the natural language
state machine framework from Quoth[2] to work with the A-Frame[3] VR
support I've written for Caffeine[4].
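     For readers who haven't used the Web Speech API: a minimal sketch
of continuous recognition feeding a phrase dispatcher. The recognizer
wiring only runs in a browser (Chrome); dispatch() is plain JavaScript.
The handler names and patterns are illustrative assumptions, not
Caffeine's or Quoth's actual API.

```javascript
// Match a final transcript against a list of [pattern, handler] pairs.
function dispatch(transcript, handlers) {
  const phrase = transcript.trim().toLowerCase();
  for (const [pattern, handler] of handlers) {
    const match = phrase.match(pattern);
    if (match) return handler(...match.slice(1));
  }
  return null; // unrecognized phrases are silently ignored
}

// Example handlers for the camera phrases described below (hypothetical).
const handlers = [
  [/^move (forward|back|up|down) (\d+)\.?$/, (dir, n) => `move:${dir}:${n}`],
  [/^rotate (left|right) (\d+)\.?$/, (dir, n) => `rotate:${dir}:${n}`],
  [/^go home\.?$/, () => 'goHome'],
];

// Browser-only wiring, guarded so the sketch also loads under Node.
const Recognition = typeof window !== 'undefined'
  ? (window.SpeechRecognition || window.webkitSpeechRecognition)
  : null;
if (Recognition) {
  const recognizer = new Recognition();
  recognizer.continuous = true;      // keep listening across utterances
  recognizer.interimResults = false; // act only on final transcripts
  recognizer.onresult = (event) => {
    const result = event.results[event.results.length - 1];
    if (result.isFinal) dispatch(result[0].transcript, handlers);
  };
  recognizer.start();
}
```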

     So far, I've implemented phrases for moving and rotating the
camera, like...

-    "Move forward 10."
-    "Move up 15."
-    "Rotate left 45."
-    "Go home."

     ...and phrases for controlling the Squeak IDE, like...

-    "Open a browser."
-    "Browse implementors of at colon put colon."
-    "Close window."
-    "Find a class."
-    "New method."
-    "Start typing."
-    "Accept."
-    "Evaluate three plus four."

     What would you expect a speech-capable Squeak IDE to understand?
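     Two of the transforms implied by the phrases above can be sketched
as small string functions: turning spoken selector words back into a
Smalltalk selector, and spoken number words into an evaluable
expression. Function names are illustrative; Quoth's actual grammar
machinery is richer than this.

```javascript
// "at colon put colon" -> "at:put:". Assumes each keyword part is a
// single spoken word; recognizers generally emit lowercase text.
function spokenToSelector(phrase) {
  return phrase.trim().replace(/\s*\bcolon\b\s*/g, ':');
}

// "three plus four" -> "3 + 4", ready to hand to the evaluator.
const NUMBER_WORDS = {
  zero: 0, one: 1, two: 2, three: 3, four: 4,
  five: 5, six: 6, seven: 7, eight: 8, nine: 9,
};
function spokenToExpression(phrase) {
  const ops = { plus: '+', minus: '-', times: '*' };
  return phrase.trim().toLowerCase().split(/\s+/)
    .map(w => (w in NUMBER_WORDS ? String(NUMBER_WORDS[w]) : (ops[w] || w)))
    .join(' ');
}
```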


     thanks,

-C

[1] https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API
[2] http://netjam.org/quoth
[3] https://aframe.io
[4] https://caffeine.js.org/a-frame

--
Craig Latta
Black Page Digital
Amsterdam :: San Francisco
[hidden email]
+31   6 2757 7177 (SMS ok)
+ 1 415  287 3547 (no SMS)



Re: voice control of the Squeak IDE

David T. Lewis
On Thu, Jul 05, 2018 at 10:54:22AM +0200, Craig Latta wrote:

> [...]
>
>      What would you expect a speech-capable Squeak IDE to understand?

I have so little experience here that I quite honestly don't know what
to expect. But one idea that comes to mind:

I would like to tell an object to start listening to me, such that I
could start asking it to do things. And I'd like that object to be able
to have its own vocabulary so that it might respond to things that it
understands, and ignore other things.

For example, if the object were the little car that runs around in Etoys,
I would like to be able to tell the car to start listening to me, so
that I could ask the car to go faster or turn left. Meanwhile, I might
want to be able to separately speak to the current SmalltalkImage and
ask it to save and quit the image.
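This idea of per-object vocabularies maps naturally onto a registry of
listeners, each with its own phrase table, and a router that sends each
utterance to whichever object is currently listening. One possible
shape, with all names hypothetical:

```javascript
// Each object registers the phrases it understands; the router forwards
// utterances to the active listener, which ignores anything outside its
// vocabulary, as described above.
class SpeechRouter {
  constructor() {
    this.vocabularies = new Map(); // object name -> {phrase: handler}
    this.listener = null;          // name of the object now listening
  }
  register(name, vocabulary) {
    this.vocabularies.set(name, vocabulary);
  }
  hear(transcript) {
    const phrase = transcript.trim().toLowerCase();
    // "car, start listening" switches the active listener.
    const wake = phrase.match(/^(\w+), start listening$/);
    if (wake && this.vocabularies.has(wake[1])) {
      this.listener = wake[1];
      return `${wake[1]} listening`;
    }
    const vocab = this.listener && this.vocabularies.get(this.listener);
    if (vocab && phrase in vocab) return vocab[phrase]();
    return null; // not understood: ignored
  }
}

// The Etoys car and the SmalltalkImage each bring their own vocabulary.
const router = new SpeechRouter();
router.register('car', {
  'go faster': () => 'car accelerating',
  'turn left': () => 'car turning left',
});
router.register('image', {
  'save and quit': () => 'image saved, quitting',
});
```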

Dave





Re: voice control of the Squeak IDE

Edgar J. De Cleene-3
In reply to this post by Craig Latta
Amazing as always


On 05/07/2018, 05:54, "Craig Latta" <[hidden email]> wrote:

> [...]
>
>      What would you expect a speech-capable Squeak IDE to understand?