Uses voice recognition to transcribe and detect offensive content in audio.
The "OffensiveAudioClassifier" library detects offensive content in real-time audio. It combines Apple's Speech framework, which transcribes audio into text, with a machine learning model that classifies the transcribed text into three categories: "neither" (neutral), "offensive," and "hate" (hate speech).
- Audio Transcription: Transcribes audio into text with high precision, utilizing Apple's Speech framework.
- Offense Detection: Identifies offensive language and hate speech in the transcribed text.
- Machine Learning Model: Implements a robust machine learning model trained on over 35,000 expressions, built with Apple's Create ML.
- Supports both SwiftUI and UIKit.
- Does not require internet connection for use.
- English language support only (support for more languages is planned).
- Content Moderation: Facilitates content moderation on online platforms such as forums, social networks, and messaging apps.
- Combatting Hate Speech: Protects users against offensive and discriminatory content.
- Enhanced User Experience: Provides a more enjoyable and positive experience for your users.
- iOS 17.0+
- macOS 14.0+
- watchOS 10.0+
- tvOS 17.0+
In Xcode, go to File > Add Package Dependencies..., enter the following URL, and click Add:

https://github.com/erickrib/OffensiveAudioClassifier

Or update the dependencies in your Package.swift directly:

```swift
dependencies: [
    .package(url: "https://github.com/erickrib/OffensiveAudioClassifier", .upToNextMajor(from: "1.0.5"))
]
```
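For context, a complete Package.swift using this dependency might look like the sketch below. The app name and target name are placeholders, and the product name is assumed to match the package name:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp", // placeholder name
    platforms: [.iOS(.v17), .macOS(.v14), .watchOS(.v10), .tvOS(.v17)],
    dependencies: [
        .package(url: "https://github.com/erickrib/OffensiveAudioClassifier", .upToNextMajor(from: "1.0.5"))
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                // Assumes the library product shares the package name.
                .product(name: "OffensiveAudioClassifier", package: "OffensiveAudioClassifier")
            ]
        )
    ]
)
```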
OffensiveAudioClassifier is also available through CocoaPods. To install it, simply add the following line to your Podfile:

```ruby
pod 'OffensiveAudioClassifier'
```
Next, import the library into your Swift code:

```swift
import OffensiveAudioClassifier
```
For SwiftUI applications, instantiate the class before calling any of its methods. Use the @StateObject property wrapper to manage the lifecycle of the OffensiveAudioClassifier instance:

```swift
@StateObject var offensiveClassifier = OffensiveAudioClassifier()

// Alternatively, initialize the object with an initial transcript.
@StateObject var offensiveClassifier = OffensiveAudioClassifier(initialTranscript: "initial example text")
```
```swift
// Starts the transcription process, collecting audio input from the
// microphone and converting it to text.
offensiveClassifier.transcribe()

// Stops the transcription process.
offensiveClassifier.stopTranscribing()

// Calls the machine learning model directly to classify a string input.
offensiveClassifier.detectOffensive(message: "example text")

// The current classification of the transcribed text:
// "neither", "offensive", or "hate".
offensiveClassifier.textClassification

// The real-time transcription of audio collected from the microphone.
offensiveClassifier.transcript
```
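Putting these pieces together, a minimal SwiftUI view might look like the following sketch. The view name and button labels are illustrative, not part of the library, and textClassification is assumed to be displayable via string interpolation:

```swift
import SwiftUI
import OffensiveAudioClassifier

struct ModerationView: View {
    // Manages the classifier's lifecycle for this view.
    @StateObject var offensiveClassifier = OffensiveAudioClassifier()

    var body: some View {
        VStack(spacing: 16) {
            // Live transcription from the microphone.
            Text(offensiveClassifier.transcript)

            // "neither", "offensive", or "hate".
            Text("Classification: \(offensiveClassifier.textClassification)")

            Button("Start") { offensiveClassifier.transcribe() }
            Button("Stop") { offensiveClassifier.stopTranscribing() }
        }
        .padding()
    }
}
```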
Before using any method from OffensiveAudioClassifier, instantiate the main class and keep a reference to it for the lifetime of your view controller:

```swift
import UIKit
import OffensiveAudioClassifier

class ViewController: UIViewController {
    var offensiveClassifier: OffensiveAudioClassifier?

    override func viewDidLoad() {
        super.viewDidLoad()
        offensiveClassifier = OffensiveAudioClassifier()
    }
}
```
The OffensiveAudioClassifier class provides two properties: transcript and textClassification. You can observe these properties to get real-time updates on the transcribed text and its classification.
To observe these properties, you can create a custom observer by conforming to the OffensiveAudioClassifierDelegate protocol:
```swift
import UIKit
import OffensiveAudioClassifier

class ViewController: UIViewController, OffensiveAudioClassifierDelegate {
    var offensiveClassifier: OffensiveAudioClassifier?

    override func viewDidLoad() {
        super.viewDidLoad()
        offensiveClassifier = OffensiveAudioClassifier()
        // Set the delegate to receive transcript and classification updates.
        offensiveClassifier?.delegate = self
    }

    func updateTranscript(_ text: String) {
        // Handle the updated transcript.
    }

    func updateOffensiveText(_ text: String) {
        // Handle the updated offensive text classification.
    }
}
```
To start transcribing audio, call the transcribe() method:

```swift
offensiveClassifier?.transcribe()
```
To stop transcribing, call the stopTranscribing() method:

```swift
offensiveClassifier?.stopTranscribing()
```
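The UIKit pieces above can be combined into one sketch. The controller name, labels, and button actions are illustrative and not part of the library; only the classifier API and delegate methods come from the documentation above:

```swift
import UIKit
import OffensiveAudioClassifier

class RecordingViewController: UIViewController, OffensiveAudioClassifierDelegate {
    var offensiveClassifier: OffensiveAudioClassifier?
    let transcriptLabel = UILabel()
    let classificationLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        offensiveClassifier = OffensiveAudioClassifier()
        offensiveClassifier?.delegate = self
    }

    // Hypothetical button action that starts transcription.
    @objc func startTapped() {
        offensiveClassifier?.transcribe()
    }

    // Hypothetical button action that stops transcription.
    @objc func stopTapped() {
        offensiveClassifier?.stopTranscribing()
    }

    // Delegate callback with the latest transcript.
    func updateTranscript(_ text: String) {
        transcriptLabel.text = text
    }

    // Delegate callback with the latest classification.
    func updateOffensiveText(_ text: String) {
        classificationLabel.text = text
    }
}
```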
To use the voice recognition and audio recording features, your application must request and obtain the necessary permissions from the user. Ensure that you have added the NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription keys to your application's Info.plist file with appropriate descriptions.
| Key | Type | Value |
|---|---|---|
| Privacy - Microphone Usage Description | String | This app needs access to the microphone to record and analyze audio in real time. |
| Privacy - Speech Recognition Usage Description | String | This app requires access to speech recognition to transcribe and analyze audio content. |

Attention: without these entries, the library will not function correctly.
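In raw XML, the corresponding Info.plist entries would look roughly like this; the keys are the ones required above, while the usage strings are examples you should replace with your own wording:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs access to the microphone to record and analyze audio in real time.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app requires access to speech recognition to transcribe and analyze audio content.</string>
```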