
#116103 Servo benchmark stress test

Posted by firened - 25 June 2022 - 02:26


goal: find robust servo models for a Minus-Type robot, in the hope of extending the time between servo failures and repairs. A per-piece price of up to 20$ may be acceptable (Pololu N20 motors are also in that range). Minus-Type uses micro servos, also called 9g or SG90.

the following servo models were tested:

1) DIYMore SG90, plastic gear, 1$: recommended by Vigibot. very quick, which makes for a comically fast head nod that people find hilarious. used on most of my robots (not WalterJakob's pan-tilt); personal failure rate is pretty high.

2) DIYMore MG90S, metal gear, 3$: had servos in the past that were DOA or didn't work at the lower battery voltage; unfortunately i can't remember the exact reason.

3) TiankongRC TS90A, plastic gear, 270°, 3$

4) Doooman Hobby DM-S0090MD, metal gear, 270°, 6$

5) DFRobot DS-S006L, plastic gear, 9$: has a clutch to protect internal gears.

6) FiTec FS90MG, metal gear, 6$

7) Spektrum A330, plastic gear, 18$

8) Futaba S3114, plastic gear, 20$: this "sub-micro" servo doesn't fit the current 3D printer files and requires a small adjustment.

9) HiTec HS-55+, plastic gear, 20$

10) HiTec HS-5055MG, metal gear, 30$

11) HiTec HS-65MG, metal gear, 30$
IMG_20220514_134716.jpg
↓ See conclusions or table "benchmark table.xlsx" for servo failure times.

the test tries to simulate real-world usage; common scenarios:

• must work across the Vbat voltage range 3.3V - 4.2V

• usually low weight/momentum movements (camera is pretty lightweight)

• people may accidentally hit the robot

• robot may drive into walls or objects with a servo absorbing the hit

• servo may want to move into an object and be limited by it

stress test setup:

pre-test #0: power from 3.3V and check whether the servo is functional at all (most servo specifications state a minimum operating voltage of 4V). stop the movement by hand to check whether it stops immediately / gets stuck.

main test #1: power from 3.3V, move back and forth (90° - 0°) automatically (e.g. 1 cycle every 6s to keep temperature low), no weight attached. from time to time, force it into an obstacle (the 180° position can't be reached) to simulate real-world interaction. let it run every night and check how many minutes each servo lasts before it stops working.
IMG_20220522_191724.jpg
notes:

27.5.22: ran test #1 for 2400min (actual movement: 400min), 1 cycle (to 0° and back to 90°) every 6s, no failures.

-> replaced with 10A 3.3V power supply. increased speed from 1 cycle every 6s to 1 cycle every 2s. now movement from 3 servos overlap. can still be doubled. temperature IR meter: room = 27°C, servo avg = 35°C, servo max = 37°C.

28.5.22: ran test #1 for 480min (actual movement: 240min), 1 cycle every 2s no failures.

-> increased speed to 1 cycle every 1s. movement from 6 servos overlap, maximum possible. temperature IR meter: servo avg = 42°C, servo max = 46°C

29.5.22: ran test #1 for 800min (actual movement: 800min), 1 cycle every 1s.

-> divided the runtime value by the cycle interval to get the "actual movement" value.
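in other words, "actual movement" is just the runtime divided by the cycle interval in seconds. a quick sanity check of the numbers above:

```shell
# "actual movement" = runtime divided by the cycle interval in seconds:
# a 1-cycle-every-6s run only moves 1/6th as much as a 1-cycle-every-1s run
actual_movement() { echo $(( $1 / $2 )); }   # $1 = runtime in min, $2 = interval in s

actual_movement 2400 6   # -> 400 (27.5.22 run)
actual_movement 480 2    # -> 240 (28.5.22 run)
actual_movement 800 1    # -> 800 (29.5.22 run)
```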
Attached file: VID_20220606_211516.mp4 - 13.93 MB - 4 downloads
Conclusion for Botkins / Minus-Type Robot:

6 servos behaved well and had a more than acceptable runtime until failure.

Metal Gears:

• DIYMore MG90S: Okay

• FiTec FS90MG: Okay

• Doooman Hobby DM-S0090MD: Okay (Pan 270°: cable fatigue!)

Plastic Gears:

• DIYMore SG90: Okay

• DFRobot DS-S006L: Okay (auto-revert would need some testing)

• TiankongRC TS90A: Okay (Pan 270°: cable fatigue!)

not recommended servos:

• Spektrum A330: No (failed early)

• Futaba S3114: No (.stl doesn't fit)

• HiTec HS-55+: No (Slow)

• HiTec HS-5055MG: No (3.3V unsupported)

• HiTec HS-65MG: No (Slow)

See table "benchmark table.xlsx" for servo failure times.

Attached file: benchmark table.xlsx - 10.69 KB - 4 downloads




#115084 Help - Freemove Car Kit on Vigibot

Posted by firened - 4 January 2022 - 11:17

oh very cool, it's working.

there's an extra comma in the config:
    },
,
    {
      "NAME": "Right wheel rear",
that's probably why you got an invalid syntax error.
but glad it's running now!
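for next time: the config can be syntax-checked locally before uploading (a sketch assuming python3 is installed; the demo writes a sample file, point the command at your own local copy of robot.json instead):

```shell
# quick JSON syntax check; an extra comma fails with a clear parse error
# (minimal sample file standing in for a local copy of robot.json)
printf '{"NAME": "Right wheel rear"}\n' > /tmp/robot_sample.json
python3 -m json.tool /tmp/robot_sample.json > /dev/null && echo "config OK"
```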

regarding the rotation, i *think* you simply have to invert the GAINS8 on both "Right wheel" entries. remove the minus sign so it looks like this:
      "GAINS8": [
        1,
        1
      ],



#114101 How To Display Chat Text On Your Robot's Framebuffer

Posted by firened - 22 August 2021 - 06:30

it's possible to redirect anything to `/dev/tty0` and have it show up on the framebuffer. this can be the output of your custom scripts, or even animations:
`sudo apt install sl cmatrix libaa-bin`

a passing train:
`sudo -s`
`/usr/games/sl > /dev/tty0`

matrix:
`sudo -s`
`cmatrix > /dev/tty0`

or a fire:
`sudo -s`
`aafire > /dev/tty0`

or follow the instructions here for an aquarium:
https://www.tecmint....un-in-terminal/
`sudo -s`
`asciiquarium > /dev/tty0`


#114060 How To Display Chat Text On Your Robot's Framebuffer

Posted by firened - 17 August 2021 - 04:00

add a framebuffer view:
- add a CAMERA in hardware config with SOURCE: 1
- add a COMMAND in remote control config and use the created camera number

display chat text on framebuffer:
- install dependencies:
`sudo apt install lolcat cowsay figlet`
- then edit robot.json:
`sudo nano /boot/robot.json`

for a cow with a speech bubble, add:
"CMDTTS": "cat /tmp/tts.txt | /usr/games/cowsay | /usr/games/lolcat -F 0.3 > /dev/tty0"
for big colorful text (thank you Pascal) :
"CMDTTS": "cat /tmp/tts.txt | /usr/bin/figlet -f small | /usr/games/lolcat > /dev/tty0"
there are lots of possibilities and adjustments available with lolcat, figlet, cowsay, toilet, fortune and more. add your favorite CMDTTS in the comments!
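for example, another combination that should work with the packages above (hedged: i haven't run this exact line; `-f tux` selects the penguin figure that ships with cowsay):

```json
"CMDTTS": "cat /tmp/tts.txt | /usr/games/cowsay -f tux | /usr/games/lolcat > /dev/tty0"
```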


#113891 Modification leds IR en leds blanches

Posted by firened - 13 July 2021 - 07:57

instructions in English:
1. bridge the 2 circled pins from the photo together using a small wire or wire wrap. use a magnifying glass and solder flux.
2. desolder and bend up both pins of the IR LED.
3. turn the module upside down, and heat the center pad on the backside to desolder the IR LED case. use tweezers to gently pull on the IR LED case.

4. add some flux to the front center pad
5. place new LED and solder both pins of the new LED to the module
6. heat center pad on the backside to solder the new LED case to the front center pad. (max 5s)

backside looks like this:
IMG_20200617_112725.jpg

additional notes:
the white LED draws a similar current to the IR LED. i suggest turning the potentiometer and checking whether the LED gets hot after 5min; it should only get warm.
3.3V is enough for all LED colors

by bridging the 2 pins on the photo above, it uses the 3.3V from the camera module instead of only 2.5V.

3W White LEDs like this:
https://a.aliexpress.com/_BTEedY


#112075 Vigibot Pi to multiple ESP32 WLAN Communications Example

Posted by firened - 15 December 2020 - 11:11

UPDATE 16.08.2021: find the updated code and guide on GitHub: https://github.com/e...-Multiple-ESP32

Note:
if you want to control a single ESP32 robot/gadget, use this guide instead:
https://www.robot-ma...ample/?p=111377
if you want to control multiple ESP32 robots/gadgets, continue.

This code allows control of multiple ESP32 robots or gadgets over WLAN using vigibot.
A raspberry is still required for the camera and needs to be on the same network. the serial data usually sent to and received from an Arduino over UART ( https://www.robot-ma...cation-example/ ) is sent over WLAN instead, allowing control of small robots or gadgets.
usage example: a pi with a camera watches an arena where multiple small ESP32 robots can be driven, the ESP32 bots do not have an on-board camera.
second usage example: open or close a door or toy controlled from an ESP32.

1) assign your pi a static IP address
https://thepihut.com...-address-update

2) the pi needs to create a virtual serial port that's accessible over WLAN from the ESP32. install socat and ncat on your raspberry:
sudo apt install socat ncat
3) create a new script:
sudo nano /usr/local/ncat_esp.sh
4) paste the following code:
#!/bin/bash
# link two virtual serial ports: /dev/ttyVigi0 for the vigiclient,
# /dev/ttySrv as the backend that ncat serves over TCP
sudo socat pty,link=/dev/ttyVigi0,rawer,shut-none pty,link=/dev/ttySrv,rawer,shut-none &

# serve /dev/ttySrv on TCP port 7070; ncat exits when a client idles
# for 15s, so restart it in a loop
while true
do
  sleep 1
  sudo ncat -l -k -4 --idle-timeout 15 7070 </dev/ttySrv >/dev/ttySrv
  date
  echo "restarting ncat"
done
5) add permissions
sudo chmod +x /usr/local/ncat_esp.sh
6) run the script
sudo /usr/local/ncat_esp.sh
7) insert your
- network info
- raspberry host IP
- vigibot NBpositions (if different from default)
into the following sketch and upload it to your ESP32 from the arduino IDE using a usb cable.
note: only one ESP can send a reply. make sure `client.write();` is only called on one ESP32; all others only listen without sending any data back. (i recommend that exactly one ESP32 sends data back: this way the vigibot robot won't wake up if that ESP32 isn't connected properly, which makes it easier to notice that it isn't working.)

(sketch: see the updated code on the GitHub repo linked above)


8) open the serial monitor to confirm a successful WLAN and TCP client connection.

9) test the pi <-> ESP connection by logging in as root in a new shell (needs root) :
su -
and sending a test text frame:
echo '$T  hello from cli       $n' > /dev/ttyVigi0
the arduino serial monitor now says "hello from cli "
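the `$T...$n` frame can also be generated from a script. the exact frame layout is defined by the vigibot client, so the helper below is only a hypothetical sketch that reproduces the shape of the example frame (header `$T`, two spaces, text padded to 21 characters, trailer `$n`):

```shell
# hypothetical helper: pad a message into the same fixed-width text frame
# shape as the example above; the real layout is defined by the vigibot client
text_frame() { printf '$T  %-21s$n' "$1"; }

text_frame "hello from cli"   # -> $T  hello from cli       $n
```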

10) if it works, add it to start at boot:
sudo nano /etc/rc.local
and add
/usr/local/ncat_esp.sh &
above the line "exit 0".

11) add an entry on the vigibot online remote control config -> SERIALPORTS -> 3 with value: "/dev/ttyVigi0"

12) set WRITEUSERDEVICE to "3"

13) if you set up an ESP32 to send data back:
set READUSERDEVICE to "3"

restart the client and wake your robot up, if it wakes up then everything works.


related notes:
you can attach a dummy client from the command line, in case you want to check that the vigibot frames are being made available:
sudo socat tcp:localhost:7070 -
   

similarly it's theoretically also possible to fetch this data from another computer or custom script/application.

there's an outage every few weeks or so from which it doesn't recover.
it looks like ncat doesn't reopen the socat socket if there's too much data buffered. the workaround is to manually drain the socat socket using
sudo cat /dev/ttySrv
and then cancel with `ctrl-c`.
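until the root cause is found, that manual drain can also be scripted (a hypothetical helper, not something i run in production; `timeout` is from coreutils):

```shell
# hypothetical helper: drain whatever is buffered on the socat pty for up to
# 2 seconds so ncat can reopen it; could be run manually or from cron
drain_pty() {
  timeout 2 cat "${1:-/dev/ttySrv}" > /dev/null || true  # exit 124 on timeout is fine
  echo "drained ${1:-/dev/ttySrv}"
}

printf 'stale data' > /tmp/fake_pty   # stand-in for /dev/ttySrv in this demo
drain_pty /tmp/fake_pty               # -> drained /tmp/fake_pty
```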

 


#111269 how to add an intro video to your robot

Posted by firened - 25 September 2020 - 10:59

it's possible to add a custom intro / instructions video to your robot.
be aware that this is for adding an intro video to your existing robot with a real camera already in place, not for adding a "video only bot"; check the #rules-documentation channel on discord for the official rules.
you can use the intro so people see your robot from a different perspective (like a mirror would), or for special instructions that may not be apparent.
also make sure that the intro video is not the default view of your robot (explained in step 5).
 
Standard setup, without video sound:

1) login via ssh and create a symlink to ffmpeg:

sudo ln -s /usr/bin/ffmpeg /usr/local/vigiclient/processdiffintro

2) create a second symlink to your video (change the Rick _Astley path) :

sudo ln -s /home/pi/Rick_Astley_Never_Gonna_Give_You_Up.mp4 /usr/local/vigiclient/intro

3) duplicate HARDWARE CONFIGURATION -> CAMERAS -> 0 , and set CAMERAS -> 1 -> SOURCE : 7
(use "TREE" or "TEXT" layout, SOURCE number is currently 7, but may change)
Screenshot_20200914_142914_com.opera.browser.jpg

4) duplicate REMOTE CONFIGURATION -> COMMANDES -> 0 , and set your created camera number in COMMANDES -> 0 -> CAMERA : 1
(use "TREE" or "TEXT" layout)

5) set REMOTE CONFIGURATION -> DEFAUTCOMMANDE : 1 , so that the video doesn't always start playing when clicking the stop button (it then requires pressing the prev/left button instead).
 
restart robot client, now your robot will play the video.
 
OPTIONAL: Play video sound on the robot's speaker:
 
6) copy the whole "CMDDIFFUSION" array from /usr/local/vigiclient/sys.json into your /boot/robot.json file. (if Vigibot pushes an update to sys.json you will have to manually update/re-copy the array.)
 
7) in your robot.json, replace the ".../processdiffintro" array entry with the following ffmpeg command. on the last line, "plughw:1", enter your PLAYBACKDEVICE number; you can figure that out with the command `alsamixer` -> F6.

] , [
   "/usr/local/vigiclient/processdiffintro",
   " -loglevel fatal",
   " -stream_loop -1",
   " -re",
   " -i /usr/local/vigiclient/intro",
   " -c:v h264_omx",
   " -profile:v baseline",
   " -b:v BITRATE",
   " -flags:v +global_header",
   " -bsf:v dump_extra",
   " -f rawvideo",
   " -vf 'scale=640x480:force_original_aspect_ratio=decrease,pad=ih*4/3:ih:(ow-iw)/2:(oh-ih)/2'",
   " tcp://127.0.0.1:VIDEOLOCALPORT",
   " -f alsa",
   " -filter:a volume=0.25",
   " plughw:1"
  ]

robot.json should then look something like this:
Screenshot_20211022_110012.jpg
 
OPTIONAL 2: Route video sound directly into the Vigibot audio channel (without playing on the speaker):
Todo: ask Pierre

Thanks mike118 for the official vigibot logo animation that we can use in our custom intro video.
it's here as direct download:
https://www.robot-ma...igibotintro.mov
and here on youtube:




#110796 how to use the Amazon Alexa voice on your robot

Posted by firened - 8 August 2020 - 10:48

please note that the following app connects to amazon using my access keys. a decent number of requests is included, so feel free to use it for your robot or your other raspberry projects. if you wish to use your own credentials, check out the GitHub repo, which describes the node source code without any access keys:
https://github.com/e...nment/aws-polly

to use the pre-authenticated executable:
1) because i can't make my access keys public, you will have to send me a PM or a discord message @firened#1228 to get the executable.

2) add permission
sudo chmod +x /usr/local/aws-polly-signed-signal
3) install dependency
sudo apt install mpg123
4) edit your robot.json file
sudo nano /boot/robot.json
and add
,
"CMDTTS":"pkill -x -o -SIGUSR1 -f '/usr/local/aws-polly-signed-signal -a plughw:PLAYBACKDEVICE -v Mathieu' || /usr/local/aws-polly-signed-signal -a plughw:PLAYBACKDEVICE -v Mathieu &"
it should then look like this:
Screenshot_20200808_113733.jpg

to use a different voice, replace "Mathieu" twice in your robot.json file with a different voice-id.
Listen to some voice examples:
https://www.amazonaws.cn/en/polly/
Full list of available voices: (write voice names without phonetics. Léa -> Lea)
https://docs.aws.ama.../voicelist.html

see all these TTS related posts:
Amazon Alexa TTS
picoTTS
Google TTS
Enable espeak markup language


#110519 How to enable markup language tags for the espeak TTS Voice on your robot

Posted by firened - 13 July 2020 - 03:10

here are a few fun phrases to make the TTS sing or otherwise mess with it.
because every TTS engine works differently, i will only list phrases that work well for the espeak TTS with markup language enabled.
add your own fun phrases for this TTS in the comments!
first, some phrases for espeak without markup language:
 
oinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoink
 
umnpkkrrrkdumnpkkrrrkkpkkrrrkkpkkrrrkkpkkrrrkkpkkrrrkkpkkrrrkkpkkrrrkkpkkrrrkkpkkdumndpkkrrrkumndumndumndumnpkkrrrkdumndumndumzznbschkrrrkkpkkrrrkkpkkrrrbschkrrrkkpkkrrrkkpkkrrrumndumnpkkrrrkdumndumnpk
 
Your Motercycle goes zolczk zolczk zolczk zolczk zolczk zolczk zolczk zolczk zolczk zolczk zolczk zolczk zolczk zolczk zooik zoooisk
 
here are more TTS phrases that you can use (the "/tts" is not necessary) :
https://techwafer.co...cord-tts-lines/
 
and here phrases with enabled markup language:
 
<prosody pitch="0" rate="0">daaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaiiiikikikikikikikklklkllllklklllkoooolokolooooooooooooooooooooooooooooooooooooah</prosody>
 
<voice gender="female">wowowowowowoowowowowowowowoowowowowowowowoowowowowowowowowowo
 
<prosody pitch="95" rate="0.8">dah</br>dah</br>dah</br><prosody pitch="80" rate="0.5">dah<prosody pitch="180" rate="2"></br>dah<prosody pitch="170" rate="0.5">dah</br>
 
one breath from my smoking π -<prosody pitch="80" rate="0">aaaaand<prosody pitch="120" rate="0">wowowow<prosody pitch="200" rate="0">wowowowow<prosody pitch="0" rate="0">wowowwowowowo <prosody pitch="200" rate="0">wowowowowo
 
<prosody pitch="0" rate="0"> lol
 
<prosody pitch="200">lol
 
my motorbike's last moments sounded like <prosody pitch="0" rate="1.2"> oinkoinkoinkoinkoink<prosody pitch="0" rate="0.8"> oinkoinkoinkoinkoink<prosody pitch="0" rate="0.8"> oinkoinkoinkoinkoink<prosody pitch="0" rate="0"> oinkoinkoinkoinkoink err
 
the rain goes <prosody pitch="0" rate="0"> plip<prosody pitch="200">plop<prosody pitch="0" rate="0"> plip<prosody pitch="200">plop<prosody pitch="0" rate="0"> plip<prosody pitch="200">plop<prosody pitch="0" rate="0"> plip<prosody pitch="200">plop

<prosody pitch="0" rate="0"> my <prosody pitch="200">rupteur <prosody pitch="0" rate="0"> is<prosody pitch="200">micro

<prosody pitch="200" rate="0.1"> can - <prosody pitch="100">you hear - the choppers sing: <prosody pitch="0" rate="30"> oinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoinkoi

<prosody pitch="80">so - long - <prosody pitch="30">and thanks <prosody pitch="10"> for all the fish

<prosody pitch="0" rate="0"> ufhfhfhfhfhfhfhfhfhfhfhfjfhfjfhfhfhfhfhfhfhfhfjfjfhfhhfhfufjfhfdhfhfhfhfhfhdhfhfhfhfhfjfjfjfjfffhfhfhfhhfjfhfhfhfhfhfjfjfhfhffhgu


#110518 How to enable markup language tags for the espeak TTS Voice on your robot

Posted by firened - 13 July 2020 - 02:42

speech synthesis markup language (SSML) is mostly for making the TTS sing or sound silly in various ways. mostly for laughs.
to enable the espeak markup language, add:
,
"CMDTTS": "/usr/bin/espeak -v en -m -f /tmp/tts.txt --stdout | /usr/bin/aplay -D plughw:PLAYBACKDEVICE"
to your /boot/robot.json file. it should look like this:
Screenshot_20200723_204442.jpg
 
"-v" for language: en, fr, de,..
"-m" to enable markup tags.
valid markup tags here: http://espeak.sourceforge.net/ssml.html 
a chat message:
<prosody pitch="0" rate="0"> lol
will say lol in a deep, slow voice.
normal pitch is "50".
normal rate is "1", double speed is "2".
 
see also these TTS related posts:
Amazon Alexa TTS
picoTTS
Google TTS


#110517 How to use the picoTTS Voice on your robot

Posted by firened - 13 July 2020 - 01:44

Thanks to Pascal for this manual!
Here is how to install PicoTTS, the Android speech synthesis, on a robot:
wget http://ftp.us.debian.org/debian/pool/non-free/s/svox/libttspico0_1.0+git20130326-9_armhf.deb 
wget http://ftp.us.debian.org/debian/pool/non-free/s/svox/libttspico-utils_1.0+git20130326-9_armhf.deb 
sudo apt-get install -f ./libttspico0_1.0+git20130326-9_armhf.deb ./libttspico-utils_1.0+git20130326-9_armhf.deb  
And add the following to your /boot/robot.json, without forgetting the comma on the previous line:
,
"CMDTTS": "cat /tmp/tts.txt | pico2wave -l fr-FR -w /tmp/tts.wav && /usr/bin/aplay -D plughw:PLAYBACKDEVICE /tmp/tts.wav" 
and for those who find the Android voice really too muffled, just add some treble, like this:
sudo apt install sox
And the line with the sound processing becomes:
,
"CMDTTS": "cat /tmp/tts.txt | pico2wave -l fr-FR -w /tmp/tts.wav && /usr/bin/sox /tmp/tts.wav /tmp/sox.wav treble +15 && /usr/bin/aplay -D plughw:PLAYBACKDEVICE /tmp/sox.wav"
see all these TTS related posts:
Amazon Alexa TTS
picoTTS
Google TTS
Enable espeak markup language


#110516 Modification leds IR en leds blanches

Posted by firened - 13 July 2020 - 01:36

thanks for the writeup Microrupteur! :)

in case someone finds this useful, here is the schematic of the original circuit:
Screenshot_20200616_123831.jpg


#110515 How to use the Google Assistant Voice on your robot

Posted by firened - 13 July 2020 - 09:45

here are a few fun phrases to make the TTS sing or otherwise mess with it.
because every TTS engine works differently, i will only list phrases that work well with the online google TTS.
add your own fun phrases for this TTS in the comments!
 
I stubbed my toe and said OOOO.OOOOO?OOOOO,OOO;OOOO-OOO!OOOOOO-OOOOOO.OOOO!OOO,OOOOO-OOO?OOOO.OOO;OOOO-OOO.OOOO!OOO?OOOO
 
waca waca waca wacawaca waca waca wacawaca waca waca waca
 
rattatacatakatatatat rattatacatakatatatat rattatacatakatatatat rattatacatakatatatat
 
lll i lii.i lili. l i liii l i llll.lllllilllil.i ll ol ilili l illiiii l i l.i.l.l.i ilil.ii.l
 
ddd.d..ddddd.d-ddh-dd-d-dd:dd:dddddd.d..ddddd.d-ddh-dd-d-dd:dd:dddddd.d..ddddd.d-ddh-dd-d-dd:dd:ddd
 
I like music Musicaeuaeuaeuaeuaeuaeua Musiceuaeuaeuaeuaeuaeuaeuaeuaeuaeuaeuaeuaeuaeuaeu
 
gaga-gaga.gagaga-gag?ag-gaga,gaga!aaga.g:aga,aga,-gaga.gaga.ga.ga,ga!gagahgagaga.g?agag,gaga-gaggaa
 
i went to the dentist and he said to open my mouth and say a aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa a.
 
so i opened my mouth and said:  a aa aa aa aa aa aa aa aa aa aa aa aa a aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa a a aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
 
cee cee ceeceeceeceeceeceecee cee ceeceeceeceeceeceecee cee ceeceeceeceeceeceecee cee ceeceeceeceeceeceecee cee ceeceeceeceeceecee
 
it hurts when i ππππππππππππππππππππππππππππππππππππππππππππππππ

eueu eueueueu eueu eueueueu eueu eueueueu eueu eueueueu eueu eueueueu eueu eueueueu

my rabbit likes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


#110514 How to use the Google Assistant Voice on your robot

Posted by firened - 12 July 2020 - 09:21

keep in mind: if you use this, the text you enter will be sent to google.
the google online voice is considered one of the most realistic voices.
you can test the Google Assistant Voice in your browser before installing:
http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&tl=Fr&q=Bonjour,%20je%20suis%20un%20petit%20robot%20et%20j%27essaie%20une%20nouveau%20voix%20automatique%20en%20ce%20moment.%20A%20bient%C3%B4t,%20merci

and here is how to install it on a Vigibot Robot:

1) log in to your raspberry via SSH as the user pi.

2) download the TTS script "OgeSS.sh" by running:
wget https://www.robot-maker.com/vigibot/OgeSS.sh
the script splits the input into 100-character chunks and replaces some characters.
(the original script is from here, i adjusted a couple of lines:
https://elinux.org/R...eech_Synthesis))
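the chunking is essentially a line fold at 100 characters; a sketch of the idea (not the script's actual code):

```shell
# split long TTS text into 100-character chunks, one chunk per line,
# so each chunk fits into a single request
chunks() { echo "$1" | fold -w 100; }

chunks "$(printf 'x%.0s' $(seq 1 250))" | wc -l   # -> 3 chunks (100 + 100 + 50)
```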

3) make the script executable:
chmod +x OgeSS.sh
4) install the mpg123 package by running:
sudo apt-get install mpg123
5) modify your /boot/robot.json by typing:
sudo nano /boot/robot.json
and add the following code into the brackets:
,
"CMDTTS": "/home/pi/OgeSS.sh PLAYBACKDEVICE fr-FR > /dev/null"
it should look like this: (don't forget the , on the previous line)
Screenshot_20200712_202644.jpg

after a client restart the Google Assistant Voice should be activated and speak everything you type in the chat.
if it's your first time using a TTS make sure you have the correct number set in Management -> interface config -> PLAYBACKDEVICE , usually it's 0 or 1.

to change the TTS language, edit your /boot/robot.json file and set your language code.
fr-FR, en-US, de-DE are examples.
all available voices are listed here:
https://cloud.google...ech/docs/voices
 
see also these TTS related posts:
picoTTS
Enable espeak markup language
Amazon alexa TTS

UPDATE 12.2020:
if there's a 2-minute lag between sending the tts message and the robot speaking it, chances are your network struggles with IPv6 addresses.
check
cat /etc/resolv.conf
and consider disabling IPv6 by editing
sudo nano /etc/sysctl.conf
and adding
#disable ipv6 to stop ipv6 dns leak:
net.ipv6.conf.all.disable_ipv6 = 1
UPDATE 1.2021:
there's a vulnerability in the OgeSS.sh script. to replace the old script with the fixed version, run:
rm OgeSS.sh     
wget https://www.robot-maker.com/vigibot/OgeSS.sh     
chmod +x OgeSS.sh