ALSpeechRecognition API
Namespace : AL
#include <alproxies/alspeechrecognitionproxy.h>
Method list
Like any module, this module inherits methods from the ALModule API. It also has the following specific methods:
class ALSpeechRecognitionProxy
ALSpeechRecognitionProxy::getAvailableLanguages
ALSpeechRecognitionProxy::getLanguage
ALSpeechRecognitionProxy::setLanguage
ALSpeechRecognitionProxy::getParameter
ALSpeechRecognitionProxy::setParameter
ALSpeechRecognitionProxy::loadVocabulary
(deprecated)ALSpeechRecognitionProxy::getAudioExpression
ALSpeechRecognitionProxy::setAudioExpression
ALSpeechRecognitionProxy::setVisualExpression
ALSpeechRecognitionProxy::setVocabulary
ALSpeechRecognitionProxy::setWordListAsVocabulary
(deprecated)ALSpeechRecognitionProxy::compile
ALSpeechRecognitionProxy::addContext
ALSpeechRecognitionProxy::removeContext
ALSpeechRecognitionProxy::removeAllContext
ALSpeechRecognitionProxy::saveContextSet
ALSpeechRecognitionProxy::loadContextSet
ALSpeechRecognitionProxy::eraseContextSet
ALSpeechRecognitionProxy::activateRule
ALSpeechRecognitionProxy::deactivateRule
ALSpeechRecognitionProxy::activateAllRules
ALSpeechRecognitionProxy::deactivateAllRules
ALSpeechRecognitionProxy::addWordListToSlot
ALSpeechRecognitionProxy::removeWordListFromSlot
ALSpeechRecognitionProxy::getRules
ALSpeechRecognitionProxy::pause
ALSpeechRecognitionProxy::subscribe
ALSpeechRecognitionProxy::unsubscribe
Event list
"WordRecognized"
"WordRecognizedAndGrammar"
(deprecated) "LastWordRecognized"
"SpeechDetected"
"ALSpeechRecognition/IsRunning"
"ALSpeechRecognition/Status"
"ALSpeechRecognition/ActiveListening"
Methods
std::vector<std::string> ALSpeechRecognitionProxy::getAvailableLanguages()
Returns the list of languages currently installed on the system.
Returns: A list of languages. Example: ['French', 'Chinese', 'English', 'Japanese']
std::string ALSpeechRecognitionProxy::getLanguage()
Returns the language currently used by the speech recognition system.
Returns: A language name, one of the installed languages returned by ALSpeechRecognitionProxy::getAvailableLanguages. Example: 'French'
void ALSpeechRecognitionProxy::setLanguage(const std::string& language)
Sets the language used by the speech recognition system for the current application.
The setting reverts to the preferred language when the application ends.
For further details, see: Setting NAO's preferred language, Setting Pepper's preferred language.
Parameters:
- language – Name of one of the available languages. Example: 'French'
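The language calls above are typically used together: query the installed languages, then select one. A minimal sketch follows; with the real SDK the proxy would be created with `ALProxy("ALSpeechRecognition", robot_ip, 9559)`, so a hypothetical stub class stands in here to show the call sequence.

```python
class StubSpeechRecognition:
    """Hypothetical stand-in for an ALSpeechRecognition proxy."""

    def __init__(self, installed):
        self._installed = list(installed)
        self._language = self._installed[0]

    def getAvailableLanguages(self):
        # Returns the languages currently installed on the system.
        return list(self._installed)

    def getLanguage(self):
        # Returns the language currently in use.
        return self._language

    def setLanguage(self, language):
        # Only an installed language may be selected.
        if language not in self._installed:
            raise ValueError("language not installed: " + language)
        self._language = language


asr = StubSpeechRecognition(["French", "Chinese", "English", "Japanese"])
if "English" in asr.getAvailableLanguages():
    asr.setLanguage("English")
```

Checking `getAvailableLanguages` first avoids an error when the requested language is not installed on the robot.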
float ALSpeechRecognitionProxy::getParameter(const std::string& parameter)
Gets a parameter of the speech recognition engine.
Parameters:
- parameter – Name of the parameter.
Returns: Value of the parameter.
void ALSpeechRecognitionProxy::setParameter(const std::string& parameter, const float& value)
Sets a parameter of the speech recognition engine.
Parameters:
- parameter – Name of the parameter.
- value – Value of the parameter.
Supported parameters:
- Sensitivity: value between 0 and 1 setting the sensitivity of the voice activity detector used by the engine.
- NbHypotheses: number of hypotheses returned by the engine. Default: 1.
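A sketch of tuning the two documented parameters. The stub class and its default values are illustrative (the documentation only specifies the NbHypotheses default of 1); a live session would call the same methods on the real proxy.

```python
class StubSpeechRecognition:
    """Hypothetical stand-in for an ALSpeechRecognition proxy."""

    def __init__(self):
        # NbHypotheses defaults to 1 per the documentation;
        # the Sensitivity default here is an assumption.
        self._params = {"Sensitivity": 0.5, "NbHypotheses": 1.0}

    def setParameter(self, parameter, value):
        # Sensitivity is documented as a value between 0 and 1.
        if parameter == "Sensitivity" and not 0.0 <= value <= 1.0:
            raise ValueError("Sensitivity must be between 0 and 1")
        self._params[parameter] = float(value)

    def getParameter(self, parameter):
        return self._params[parameter]


asr = StubSpeechRecognition()
asr.setParameter("Sensitivity", 0.8)   # more sensitive voice activity detection
asr.setParameter("NbHypotheses", 3)    # ask the engine for its 3 best hypotheses
```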
void ALSpeechRecognitionProxy::loadVocabulary(const std::string& pathToGrammarfile)
Deprecated since version 1.20: use ALSpeechRecognitionProxy::setVocabulary instead.
Loads the vocabulary to recognize from a .lcf or .fcf file (NUANCE grammar file formats).
Parameters:
- pathToGrammarfile – Path to the .lcf or .fcf file containing the vocabulary.
bool ALSpeechRecognitionProxy::getAudioExpression()
Gets the value of the AudioExpression parameter, which indicates whether the recognition process plays a "beep".
void ALSpeechRecognitionProxy::setAudioExpression(const bool& setOrNot)
When set to True, a "beep" is played at the beginning of the recognition process and another at the end. This is a useful cue to let the user know when it is appropriate to speak.
Parameters:
- setOrNot – Enable (true) or disable (false) the audio expression.
void ALSpeechRecognitionProxy::setVisualExpression(const bool& setOrNot)
Enables or disables the LED animations that show the state of the recognition engine during the recognition process.
Parameters:
- setOrNot – Enable (true) or disable (false) the visual expression.
void ALSpeechRecognitionProxy::setVocabulary(const std::vector<std::string>& vocabulary, const bool& enableWordSpotting)
Sets the list of words or phrases (vocabulary) that the speech recognition engine should recognize. If word spotting is disabled (default), the engine expects to hear one of the specified words, nothing more, nothing less. If it is enabled, the specified words can occur anywhere in a continuous speech stream: the engine will try to spot them. The enableWordSpotting parameter also changes the results returned by the speech recognition. Please refer to ALSpeechRecognition for details.
Parameters:
- vocabulary – List of words that should be recognized.
- enableWordSpotting – Enable (true) or disable (false) word spotting.
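A typical session sets the vocabulary and then subscribes so the engine starts listening. The sketch below uses a hypothetical stub in place of the live proxy (the subscriber name "MyApp" is illustrative); the call order is the point.

```python
class StubSpeechRecognition:
    """Hypothetical stand-in for an ALSpeechRecognition proxy."""

    def __init__(self):
        self.vocabulary = []
        self.word_spotting = False
        self.subscribers = set()

    def setVocabulary(self, vocabulary, enableWordSpotting):
        # With word spotting disabled, the engine expects exactly one of
        # these words; with it enabled, it spots them in running speech.
        self.vocabulary = list(vocabulary)
        self.word_spotting = enableWordSpotting

    def subscribe(self, name):
        # Starts the engine; results go to the ALMemory key "WordRecognized".
        self.subscribers.add(name)

    def unsubscribe(self, name):
        # Stops writing results for this subscriber.
        self.subscribers.discard(name)


asr = StubSpeechRecognition()
asr.setVocabulary(["yes", "no", "hello robot"], False)
asr.subscribe("MyApp")
# ... poll the "WordRecognized" key in ALMemory while subscribed ...
asr.unsubscribe("MyApp")
```

Setting the vocabulary before subscribing ensures the engine never listens with an empty or stale word list.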
void ALSpeechRecognitionProxy::setWordListAsVocabulary(const std::vector<std::string>& vocabulary)
Deprecated since version 1.20: use ALSpeechRecognitionProxy::setVocabulary instead.
Sets the list of words or phrases (vocabulary) that the speech recognition engine should recognize. To enable word spotting, use ALSpeechRecognitionProxy::setVocabulary instead.
Parameters:
- vocabulary – List of words that should be recognized.
void ALSpeechRecognitionProxy::compile(const std::string& pathToInputBNFFile, const std::string& pathToOutputLCFFile, const std::string& language)
Converts a BNF file to an LCF file. The LCF file is a binary file with the same content as the BNF file; use it with the method addContext.
Parameters:
- pathToInputBNFFile – Path to a BNF input file. This BNF file contains the set of rules that should be recognized by the speech recognition engine.
- pathToOutputLCFFile – Path where the LCF file will be generated.
- language – Name of the language of the BNF file.
void ALSpeechRecognitionProxy::addContext(const std::string& pathToLCFFile, const std::string& contextName)
Adds the context contained in an LCF file. The LCF file contains the set of rules that should be recognized by the speech recognition engine.
Parameters:
- pathToLCFFile – Path to the LCF file to use.
- contextName – Name of the context.
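The grammar workflow is: compile a BNF rule file into the binary LCF format once, then register it as a named context. The sketch below mimics that two-step flow with a hypothetical stub; the file paths and the context name "commands" are illustrative, not prescribed by the API.

```python
class StubSpeechRecognition:
    """Hypothetical stand-in for an ALSpeechRecognition proxy."""

    def __init__(self):
        self.compiled = {}   # LCF path -> language it was compiled for
        self.contexts = {}   # context name -> LCF path

    def compile(self, pathToInputBNFFile, pathToOutputLCFFile, language):
        # Converts the BNF rule file into the binary LCF format.
        self.compiled[pathToOutputLCFFile] = language

    def addContext(self, pathToLCFFile, contextName):
        # Registers the rules in the LCF file under a context name.
        if pathToLCFFile not in self.compiled:
            raise ValueError("no such LCF file: " + pathToLCFFile)
        self.contexts[contextName] = pathToLCFFile


asr = StubSpeechRecognition()
asr.compile("/home/nao/commands.bnf", "/home/nao/commands.lcf", "English")
asr.addContext("/home/nao/commands.lcf", "commands")
```

Once added, the context's rules can be toggled with activateRule/deactivateRule and its slots filled at runtime.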
void ALSpeechRecognitionProxy::removeContext(const std::string& contextName)
Removes one context from the speech recognition engine.
Parameters:
- contextName – Name of the context to remove.
void ALSpeechRecognitionProxy::removeAllContext()
Removes all contexts from the speech recognition engine.
float ALSpeechRecognitionProxy::saveContextSet(const std::string& saveName)
Saves the current context set under the name saveName.
Note: Saved context sets are lost when NAOqi restarts.
Parameters:
- saveName – Name under which to save the context set.
float ALSpeechRecognitionProxy::loadContextSet(const std::string& saveName)
Replaces the currently loaded context set with the one previously saved under the name saveName.
Note: Reloading a saved context set does not reset its state; changes made to its activated rules or slots since the last save are not erased.
Parameters:
- saveName – Name of the saved context set to load.
float ALSpeechRecognitionProxy::eraseContextSet(const std::string& saveName)
Erases the save named saveName. This does not remove any currently loaded contexts.
Parameters:
- saveName – Name of the saved context set to erase.
float ALSpeechRecognitionProxy::activateRule(const std::string& contextName, const std::string& ruleName)
Activates a rule contained in the specified context.
Parameters:
- contextName – Name of the context to modify.
- ruleName – Name of the rule to activate.
float ALSpeechRecognitionProxy::deactivateRule(const std::string& contextName, const std::string& ruleName)
Deactivates a rule contained in the specified context.
Parameters:
- contextName – Name of the context to modify.
- ruleName – Name of the rule to deactivate.
float ALSpeechRecognitionProxy::activateAllRules(const std::string& contextName)
Activates all rules contained in the specified context.
Parameters:
- contextName – Name of the context to modify.
float ALSpeechRecognitionProxy::deactivateAllRules(const std::string& contextName)
Deactivates all rules contained in the specified context.
Parameters:
- contextName – Name of the context to modify.
float ALSpeechRecognitionProxy::addWordListToSlot(const std::string& contextName, const std::string& slotName, const std::vector<std::string>& wordList)
Adds a list of words to a slot. A slot is the part of a context that can be modified at runtime: you can fill it with a list of words that the speech recognition engine should recognize.
Parameters:
- contextName – Name of the context to modify.
- slotName – Name of the slot to modify.
- wordList – List of words to insert into the slot.
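Slot updates are typically bracketed by pause calls, since the pause method below exists precisely to let you modify contexts safely. A sketch of that pattern, with a hypothetical stub that enforces the ordering (the context and slot names are illustrative):

```python
class StubSpeechRecognition:
    """Hypothetical stand-in for an ALSpeechRecognition proxy."""

    def __init__(self):
        self.paused = False
        self.slots = {}   # (context name, slot name) -> word list

    def pause(self, isPaused):
        # True stops the ASR engine, False restarts it.
        self.paused = isPaused

    def addWordListToSlot(self, contextName, slotName, wordList):
        # Illustrative guard: modify contexts only while the engine is paused.
        if not self.paused:
            raise RuntimeError("pause the engine before modifying a context")
        self.slots[(contextName, slotName)] = list(wordList)

    def removeWordListFromSlot(self, contextName, slotName):
        if not self.paused:
            raise RuntimeError("pause the engine before modifying a context")
        self.slots.pop((contextName, slotName), None)


asr = StubSpeechRecognition()
asr.pause(True)                                            # stop the engine
asr.addWordListToSlot("commands", "names", ["Alice", "Bob"])
asr.pause(False)                                           # resume listening
```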
float ALSpeechRecognitionProxy::removeWordListFromSlot(const std::string& contextName, const std::string& slotName)
Removes all words from a slot.
Parameters:
- contextName – Name of the context to modify.
- slotName – Name of the slot to modify.
std::vector<std::string> ALSpeechRecognitionProxy::getRules(const std::string& contextName, const std::string& typeName)
Gets the rules corresponding to the specified type. The type can be:
- "start": provides entry points into a context
- "active": state of a rule; indicates whether the rule is activated or not
- "activatable": specifies a rule which can be activated or deactivated
- "slot": rules that can be changed at runtime
Parameters:
- contextName – Name of the context.
- typeName – Type of the rules requested.
float ALSpeechRecognitionProxy::pause(const bool& isPaused)
Stops or restarts the speech recognition engine, depending on the input parameter. For example, this can be used before adding contexts, activating or deactivating rules of a context, or adding words to a slot.
Parameters:
- isPaused – True (stops the ASR engine) or False (restarts it).
void ALSpeechRecognitionProxy::subscribe(const std::string& name)
Subscribes to ALSpeechRecognition. This causes the module to start writing recognition results to the ALMemory key "WordRecognized", which can be read using ALMemoryProxy::getData.
Parameters:
- name – Name identifying the subscriber.
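A sketch of the subscribe-and-read pattern. With the real SDK you would read the "WordRecognized" key through an ALMemory proxy; here hypothetical stubs stand in for both proxies, and the recognized word and confidence score are simulated values, not real engine output.

```python
class StubMemory:
    """Hypothetical stand-in for an ALMemory proxy."""

    def __init__(self):
        self._data = {"WordRecognized": ["", 0.0]}

    def insertData(self, key, value):
        self._data[key] = value

    def getData(self, key):
        return self._data[key]


class StubSpeechRecognition:
    """Hypothetical stand-in for an ALSpeechRecognition proxy."""

    def __init__(self, memory):
        self._memory = memory
        self._subscribers = set()

    def subscribe(self, name):
        self._subscribers.add(name)
        # Simulate the engine recognizing a word with a confidence score;
        # the result format is illustrative (see ALSpeechRecognition for
        # the actual "WordRecognized" layout).
        self._memory.insertData("WordRecognized", ["hello", 0.72])

    def unsubscribe(self, name):
        self._subscribers.discard(name)


memory = StubMemory()
asr = StubSpeechRecognition(memory)
asr.subscribe("MyApp")
word, confidence = memory.getData("WordRecognized")
asr.unsubscribe("MyApp")
```

In a real application the read would happen in an event callback or a polling loop rather than a single getData call.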
void ALSpeechRecognitionProxy::unsubscribe(const std::string& name)
Unsubscribes from ALSpeechRecognition. This causes the module to stop writing information to the ALMemory key "WordRecognized".
Parameters:
- name – Name identifying the subscriber (as used in ALSpeechRecognitionProxy::subscribe).
Events
Event: callback(std::string eventName, AL::ALValue value, std::string subscriberIdentifier)
"WordRecognized"
Raised when one of the words set with ALSpeechRecognitionProxy::setVocabulary has been recognized. When no word is currently recognized, this value is reinitialized.
Parameters:
- eventName (std::string) – "WordRecognized"
- value – Recognized words info. Please refer to ALSpeechRecognition for details.
- subscriberIdentifier (std::string) –
Event: callback(std::string eventName, AL::ALValue value, std::string subscriberIdentifier)
"WordRecognizedAndGrammar"
Raised when the engine produces a result. Same as WordRecognized, with one additional piece of information: the name of the grammar used for the recognition.
Parameters:
- eventName (std::string) – "WordRecognizedAndGrammar"
- value – Recognized words info. Please refer to ALSpeechRecognition for details.
- subscriberIdentifier (std::string) –
Event: callback(std::string eventName, AL::ALValue value, std::string subscriberIdentifier)
"LastWordRecognized"
Deprecated since version 1.20.
Raised when one of the words specified with ALSpeechRecognitionProxy::setWordListAsVocabulary has been recognized. This value is kept unchanged until a new word is recognized.
Parameters:
- eventName (std::string) – "LastWordRecognized"
- value – Last recognized words info. Please refer to ALSpeechRecognition for details.
- subscriberIdentifier (std::string) –
Event: callback(std::string eventName, bool value, std::string subscriberIdentifier)
"SpeechDetected"
Raised when the automatic speech recognition engine detects voice activity.
Parameters:
- eventName (std::string) – "SpeechDetected"
- value – True if voice activity is detected.
- subscriberIdentifier (std::string) –
Event: callback(std::string eventName, bool value, std::string subscriberIdentifier)
"ALSpeechRecognition/IsRunning"
Raised when the speech recognition engine is started.
Parameters:
- eventName (std::string) – "ALSpeechRecognition/IsRunning"
- value – True if the speech recognition engine is started.
- subscriberIdentifier (std::string) –
Event: callback(std::string eventName, AL::ALValue status, std::string subscriberIdentifier)
"ALSpeechRecognition/Status"
Raised when the status of the speech recognition engine changes.
Parameters:
- eventName (std::string) – "ALSpeechRecognition/Status"
- status – Can be "Idle", "ListenOn", "SpeechDetected", "EndOfProcess", "ListenOff", or "Stop". Note: the "ListenOn" status does not necessarily mean the engine is ready to process. For further details, see ALSpeechRecognition/ActiveListening().
- subscriberIdentifier (std::string) –
Event: callback(std::string eventName, bool value, std::string subscriberIdentifier)
"ALSpeechRecognition/ActiveListening"
Experimental
Raised with the value True when the engine is not only listening but also ready to process data (i.e. not raised while the ASR engine is only recording sound to be processed later).
Parameters:
- eventName (std::string) – "ALSpeechRecognition/ActiveListening"
- value – True if the engine is listening and processing data, False otherwise.
- subscriberIdentifier (std::string) –