Ensure compatibility across multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to achieve transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for receiving audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);

Audio Intelligence Models

In addition, the SDK comes with built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more information, see the official AssemblyAI blog.

Image source: Shutterstock.
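As a final note, to try the examples above you first need the SDK in your project. A minimal sketch of the setup, assuming the package is published on NuGet under the name AssemblyAI (check the SDK's documentation for the current package name):

```shell
# Add the AssemblyAI C# SDK to an existing .NET project
dotnet add package AssemblyAI
```

The API key passed to AssemblyAIClient in the snippets is a placeholder; in practice it is typically read from configuration or an environment variable rather than hard-coded.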