Non-linear mouth mapping? #10
I added a call to a non-linear scaling function (firmware v1.2). This scaling function uses the sqrt() function from the Math library, mostly for convenience. It does appear to add realism to the mouth movement. However, there still seems to be some tendency for the mouth to miss the last syllable some of the time. I don't know why this is; it is not simply a matter of voice volume dropoff. Perhaps the mini-MP3 BUSY line sometimes drops a little too soon? Perhaps the last bit is on the wrong track of the MP3 file (not the one playing through the analog processor), although I think that the recording is monaural.
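A minimal sketch of what such a sqrt-based scaling might look like. The ADC range (10-bit), servo range (0-180), and the function name are assumptions for illustration, not taken from the actual v1.2 firmware:

```cpp
#include <cmath>

// Assumed ranges: 10-bit A/D envelope samples, standard 0-180 degree servo.
const float ADC_MAX = 1023.0f;
const int SERVO_MAX = 180;

// Map an envelope sample to a servo command. Taking the square root of the
// normalized sample lifts low-level samples, which better matches the
// roughly logarithmic response of human hearing than a linear map.
int envelopeToServo(int adcValue) {
    float normalized = adcValue / ADC_MAX;   // 0.0 .. 1.0
    float scaled = std::sqrt(normalized);    // boosts small values
    return static_cast<int>(scaled * SERVO_MAX + 0.5f);
}
```

For example, an envelope sample at one quarter of full scale maps to about half of full servo travel instead of one quarter, so quiet passages still produce visible mouth movement.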
I added nlp2 with this algorithm and it seems to look good. Unsure why it does the job better than nlp1.
I think we can close this issue now.
Agreed.
As of 1/21/21: The mouth movement circuitry and test firmware appear to be working as designed. The servo-driven mouth definitely moves in sync with a spoken MP3 clip. There is a perceptible time lag as you average more samples, as expected. However, the mouth movement, while in sync with the audio, is not very convincing. I have noted that a "small" drop in audio sound level results in a seemingly much larger drop in mouth movement. I THINK that this may be because human hearing is logarithmic whereas the mapping of sound (envelope) level to servo movement is linear. Realism MIGHT be improved by some non-linear mapping of sound (envelope voltage) to servo movement which emphasizes low-value sound samples more than high-volume sound samples. Perhaps something like a four-point linear mapping of A/D values to servo commands would work:
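One way the four-point idea could be sketched is a piecewise-linear table lookup. The breakpoints below are illustrative guesses that emphasize low-value samples, not the author's actual calibration:

```cpp
// A four-point piecewise-linear map from A/D envelope values to servo
// commands. Breakpoints are assumed for illustration: the first segment
// is steep so small envelope values still move the mouth noticeably.
struct MapPoint { int adc; int servo; };

const MapPoint kMap[4] = {
    {0, 0}, {100, 80}, {400, 140}, {1023, 180}
};

int fourPointMap(int adcValue) {
    if (adcValue <= kMap[0].adc) return kMap[0].servo;
    for (int i = 1; i < 4; ++i) {
        if (adcValue <= kMap[i].adc) {
            // Linearly interpolate between breakpoints i-1 and i.
            int dAdc   = kMap[i].adc   - kMap[i - 1].adc;
            int dServo = kMap[i].servo - kMap[i - 1].servo;
            return kMap[i - 1].servo +
                   (adcValue - kMap[i - 1].adc) * dServo / dAdc;
        }
    }
    return kMap[3].servo;  // clamp above the last breakpoint
}
```

Integer interpolation keeps this cheap enough for a per-sample call on a small microcontroller; the breakpoints could then be tuned by ear against a test clip.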