ExploreRobotHead.TTSTest Class
You can type in text and synthesize it with a speech engine while simultaneously sending visual (viseme) information over a serial cable to a robot head for lip synchronization. The class needs to be rewritten for a modular application. Note that you need the interop library for SAPI 5 and a SAPI 5 compliant speech engine to make this work, as well as the .NET Framework redistributable. It currently opens a port to a tweaked version of the OpenCV HaarFaceDetect, so it requires that and a compatible camera. It also requires a compatible robot head -- most of the servo motion parameters have been hardcoded, which needs to be changed in the future. Right now I'm just having a look around :) The current compiled version uses Framework 1.1.

Access: Public
Base Classes: Object
  Members Description  
    isAnticipating    
    voice    
    SpFlags    
    serialPort    
    statusOutputDelegate    
    animationQueue    
    animationToBookmarkHash    
    animationHash    
    generalAnimationList    
    extremelyPositiveReactionAnimationList    
    mildlyPositiveReactionAnimationList    
    extremelyNegativeReactionAnimationList    
    mildlyNegativeReactionAnimationList    
    neutralReactionAnimationList    
    anticipatoryAnimationList    
    servoMinMaxDefaultList    
    animationPlaying    
    isSpeaking    
    currentSequence    
    random    
    animationToBookmarkIndex    
    animationMarker    
    gazeAnimationTemplateCount    
    audioMixerHelper    
    leftEyeAxis    
    rightEyeAxis    
    headTurnAxis    
    upperNod    
    lowerNod    
    microphoneDevice    
    lastX    
    lastY    
    TTSTest The main entry point for the application. Starts a message loop to receive events from the SAPI 5 speech engine, and sets up the serial port so commands can be sent to a Scott Edwards Electronics Mini SSC serial servo controller.

 
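The Mini SSC controller accepts a simple three-byte command frame: a sync byte (255), the servo number, and a position in 0..254. A minimal Python sketch of the framing (the project itself is C#/.NET, and the helper name here is ours):

```python
def mini_ssc_command(servo: int, position: int) -> bytes:
    """Build the three-byte Mini SSC frame: sync (255), servo number, position."""
    if not (0 <= servo <= 254 and 0 <= position <= 254):
        raise ValueError("servo and position must be in 0..254")
    return bytes([0xFF, servo, position])
```

Writing the returned bytes to the open serial port moves one servo; each frame addresses a single servo, so a full pose takes one frame per servo.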
    BlinkThread Blinks at least a certain time span apart, plus a random additional delay.

 
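The blink timing can be modeled as a fixed minimum gap plus a uniformly random extra delay. A Python sketch; the timing constants below are assumptions, not the values hardcoded in the C# source:

```python
import random

BLINK_BASE_SECONDS = 3.0    # assumed minimum gap between blinks
BLINK_JITTER_SECONDS = 4.0  # assumed maximum random extra delay

def next_blink_delay(rng: random.Random) -> float:
    """Seconds to wait before the next blink: fixed minimum plus a random extra."""
    return BLINK_BASE_SECONDS + rng.random() * BLINK_JITTER_SECONDS
```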
    Dispose    
    OpenComPort    
    InitializeServos Moves the servos to the values for silence, as determined empirically by manipulating the robot servos.


 
    WriteToSerialSafely Performs range checking here so we don't send out a servo value that would break the robot. The threshold values were determined experimentally using VSA. This should be migrated to a config file.

 
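The safety check amounts to clamping each outgoing value into a per-servo [min, max] range before it reaches the serial port. A Python sketch; the limits dictionary here is hypothetical and stands in for the VSA-derived thresholds:

```python
def clamp_servo_value(servo: int, value: int, limits: dict) -> int:
    """Clamp value into the experimentally determined [min, max] range for
    this servo before it goes out over the serial port."""
    lo, hi = limits[servo]
    return max(lo, min(hi, value))
```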
    AnimationTimer_Tick When the animation timer "ticks" it raises an event which is caught here. Each event indicates that the animation clock has advanced one timestep (i.e. one animation frame, or slice). Accordingly we take the next slice from the animation queue (here an ArrayList) and write the appropriate values out to the serial port. When done we remove the just-played elements from the beginning of the list.

 
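The tick handler's behavior can be sketched as: pop the next slice from the head of the queue and write each servo value out. A Python sketch with a deque standing in for the ArrayList queue; write_servo is a stand-in for the serial write:

```python
from collections import deque

def on_animation_tick(queue: deque, write_servo) -> None:
    """One timer tick = one animation timestep: take the next slice (a list
    of (servo, position) pairs) from the head of the queue, write each value
    out, and discard the slice."""
    if not queue:
        return
    for servo, position in queue.popleft():
        write_servo(servo, position)
```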
    DifferenceFromDefault WARNING: Assumes no gaps in servo map.

 
    voice_Bookmark Here we catch the bookmark events raised by the speech engine when it encounters a bookmark tag in the speech stream. We interpret the tag's argument as the name of an animation stored in our animation hash table, using that name as the key to retrieve the animation data (an ArrayList of byte arrays acting as animation time "slices").

 
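The bookmark handler reduces to a dictionary lookup keyed by the bookmark's argument, followed by appending the animation's slices to the playback queue. A Python sketch (the names are ours, not the C# members'):

```python
def on_bookmark(name: str, animation_hash: dict, animation_queue: list) -> None:
    """Look up the animation named by the bookmark and append its time slices
    to the playback queue; unknown names are ignored rather than crashing
    mid-speech."""
    slices = animation_hash.get(name)
    if slices is not None:
        animation_queue.extend(slices)
```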
    voice_Viseme Uses visemes raised by the speech engine as the basis for commands to the robot head. First create a map between the visemes and cardinal, device-independent mouth positions. Then map the robot's min/max servo positions to the cardinal positions using linear regression (computed offline). Use this information to scale the cardinal positions into servo commands, then send the commands.

 
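The viseme pipeline can be sketched in two steps: map the viseme id to a device-independent mouth openness, then scale that linearly into the servo's range. The openness table below is illustrative only; the real mapping comes from the offline regression:

```python
# Illustrative openness values in [0, 1]; the real, regression-derived
# mapping covers the full SAPI viseme set.
VISEME_TO_OPENNESS = {0: 0.0, 1: 0.8, 2: 1.0, 3: 0.6}

def viseme_to_servo(viseme: int, servo_min: int, servo_max: int) -> int:
    """Scale the cardinal (device-independent) mouth position linearly into
    this servo's [min, max] range."""
    openness = VISEME_TO_OPENNESS.get(viseme, 0.0)
    return round(servo_min + openness * (servo_max - servo_min))
```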
    voice_EndStream    
    voice_StartStream    
    MessageLoop A message loop so we can receive events from the speech engine. If this were a windowed app, this loop would be automatic. As a console app, we make our own loop.

 
    Speak    
    Stop    
    Animate Given the name of an animation, insert a bookmark. When the bookmark is raised, the animation will be scheduled

 
    AnimateLater    
    AnimateNow Caution -- only use this if you are SURE you want two animations to play at the same time.

 
    ScheduleSliceNow We have an animation time slice that we want played on the scheduler's next turn.

 
    LoadAnimations The animations are all loaded from a folder called VSA in the executable's local directory. Each file in that folder is assumed to be an animation file in the appropriate VSA comma+space delimited format. To play these animations on the robot, insert a bookmark into the speech stream, e.g. "Hi \!bmAnimationName there", where AnimationName is the name of the animation file without the file extension, and \!bm is the tag the speech engine recognizes as signalling a bookmark (this may vary depending on your speech engine).

 
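The loading convention can be sketched as: scan the VSA folder, use each file name (minus extension) as the animation key, and parse each line as one comma+space delimited time slice. This Python sketch simplifies the real VSA format to plain integer columns:

```python
import os

def load_animations(folder: str) -> dict:
    """Load every file in the VSA folder: the key is the file name without its
    extension; each line becomes one time slice of integer servo values."""
    animations = {}
    for fname in os.listdir(folder):
        name, _ = os.path.splitext(fname)
        with open(os.path.join(folder, fname)) as f:
            animations[name] = [[int(v) for v in line.strip().split(", ")]
                                for line in f if line.strip()]
    return animations
```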
    CreateBookmark    
    RandomAnticipatoryAnimation    
    RandomExtremelyPositiveReactionAnimation    
    RandomMildlyPositiveReactionAnimation    
    RandomExtremelyNegativeReactionAnimation    
    RandomMildlyNegativeReactionAnimation    
    RandomNeutralReactionAnimation    
    SetVoice    
    GetAvailableVoices    
    DeserializeObject    
    Gazer    
    voice_Word    
    ScheduleGazeAnimationTemplate