Can I use the Azure Speech resource without an endpoint?
I was working through the Microsoft tutorials, specifically this lab. In this lab, the client application for the Speech resource does not use the endpoint; it only uses the key and the location.
The documentation says the same thing:
To find the keys and location/region of a completed deployment, follow these steps:
1. Sign in to the Azure portal using your Microsoft account.
2. Select All resources, and select the name of your Cognitive Services resource.
3. On the left pane, under RESOURCE MANAGEMENT, select Keys and Endpoint.
Each subscription has two keys; you can use either key in your application. To copy/paste a key to your code editor or other location, select the copy button next to each key, switch windows to paste the clipboard contents to the desired location.
Additionally, copy the LOCATION value, which is your region ID (ex. westus, westeurope) for SDK calls.
As you can see, nothing is said about the endpoint. This means that the Speech resource client must somehow know how to connect to the Speech resource with just the key and the location.
I am really confused, because I thought this should be impossible without an endpoint.
For example, here are code samples that use the Speech resource without any endpoint (only a key and a location):
import os
from playsound import playsound
from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer, AudioConfig
# Get spoken command from audio file
file_name = 'light-on.wav'
audio_file = os.path.join('data', 'speech', file_name)
# Configure speech recognizer
# (cog_key and cog_location are the key and region values set earlier in the lab)
speech_config = SpeechConfig(cog_key, cog_location)
audio_config = AudioConfig(filename=audio_file) # Use file instead of default (microphone)
speech_recognizer = SpeechRecognizer(speech_config, audio_config)
# Use a one-time, synchronous call to transcribe the speech
speech = speech_recognizer.recognize_once()
# Play the original audio file
playsound(audio_file)
# Show transcribed text from audio file
print(speech.text)
I cannot even imagine how the Speech resource client (implemented by Microsoft) knows, without an endpoint, that it should connect to the resource in my Azure subscription and not to someone else's. It looks like magic to me, so I am definitely missing something here.
Thank you for trying the Azure Speech service.
You're absolutely right: the Speech service does use endpoints, like any other cloud service of this kind.
If your code uses the Speech SDK, the SDK builds the right endpoint for you based on the information you have provided, namely the location (region).
I see from your code that you are using online transcription (speech-to-text). Here you will find all of the regional endpoints used in this scenario.
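For example, here is a minimal sketch (assuming your resource lives in westeurope; substitute your own key and region). Configuring the SDK by region and configuring it with the explicit regional WebSocket endpoint end up pointing at the same place, because the SDK derives the endpoint from the region name:
from azure.cognitiveservices.speech import SpeechConfig

cog_key = '<your-resource-key>'   # the key copied from Keys and Endpoint
cog_location = 'westeurope'       # the LOCATION value, e.g. westus, westeurope

# Region-based configuration: the SDK derives the regional endpoint internally
config_by_region = SpeechConfig(subscription=cog_key, region=cog_location)

# Explicit-endpoint configuration: the same thing, with the regional
# WebSocket endpoint for online speech-to-text spelled out by hand
config_by_endpoint = SpeechConfig(
    subscription=cog_key,
    endpoint=f'wss://{cog_location}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1'
)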
There are other endpoints as well, such as those for the Speech-to-text REST API v3 or Text-to-speech. They are all described in the documentation.
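As a rough illustration of the endpoint patterns involved (westeurope is just an example region; the documentation has the authoritative list):
region = 'westeurope'   # your resource's region ID

# Speech-to-text REST API v3 (batch transcription, custom models, ...)
stt_rest_v3 = f'https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0'

# Text-to-speech REST endpoint
tts_rest = f'https://{region}.tts.speech.microsoft.com/cognitiveservices/v1'

# Token endpoint: exchanges the resource key for a short-lived access token
token_endpoint = f'https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken'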