A Node.js module built on Swift and CoreAudio for capturing the macOS system audio stream. It solves the problem that Node.js cannot directly access the audio macOS plays through the system speakers.
中文版本 | English Version
- 🎯 System-level Capture: Uses CoreAudio Process Tap technology to capture audio output from all applications
- ⚡ High Performance: Swift native code provides near-C performance
- 🔧 Easy API: Clean JavaScript interface with Promise and event-driven support
- 🎵 Real-time Processing: Supports real-time audio data processing and format conversion
- 📄 Multi-format Output: Supports WAV audio file output
- 🛡️ Error Handling: Comprehensive error handling and state management
- 📝 Detailed Logging: Provides detailed debugging and status logs
# Install from GitHub
npm install git+https://github.com/sparticleinc/mac-audio-capture.git
# Or clone repository and install locally
git clone https://github.com/sparticleinc/mac-audio-capture.git
cd mac-audio-capture
npm install
const AudioCapture = require('./lib');
async function captureAudio() {
  // Create an audio capture instance
  const capture = new AudioCapture({
    sampleRate: 48000,
    channelCount: 2
  });

  // Listen to events
  capture.on('started', () => console.log('🎙️ Started capturing'));
  capture.on('stopped', () => console.log('🛑 Stopped capturing'));
  capture.on('error', (error) => console.error('❌ Error:', error.message));

  try {
    // Record 5 seconds of audio
    const filePath = await capture.record(5000, 'output.wav');
    console.log('✅ Recording completed:', filePath);
  } catch (error) {
    console.error('Recording failed:', error.message);
  }
}

captureAudio();
const AudioCapture = require('./lib');
async function advancedCapture() {
  const capture = new AudioCapture();

  // Configure audio capture
  await capture.configure({
    sampleRate: 44100,
    channelCount: 1,
    logPath: './logs/audio.log'
  });

  // Start capture
  await capture.startCapture({ interval: 100 });

  // Real-time audio data processing
  capture.on('data', (audioData) => {
    console.log(`📊 Received ${audioData.length} audio segments`);
    // Process audio data here
  });

  // Capture for 3 seconds
  await new Promise(resolve => setTimeout(resolve, 3000));

  // Stop capture
  await capture.stopCapture();

  // Save as a WAV file
  const filePath = await capture.saveToWav('advanced-output.wav');
  console.log('File saved:', filePath);
}

advancedCapture().catch((error) => console.error('Capture failed:', error.message));
new AudioCapture(options?: AudioCaptureConfig)
Parameters:
- `options` (optional): Configuration options
  - `sampleRate`: Sample rate in Hz (default: 48000)
  - `channelCount`: Number of channels (default: 2)
  - `logPath`: Log file path
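For example, the constructor can be used with the defaults or with explicit options; the values below are illustrative and mirror the options documented above:

```javascript
const AudioCapture = require('./lib');

// Use the defaults: 48000 Hz, 2 channels
const defaultCapture = new AudioCapture();

// Override the sample rate, channel count, and log destination
const customCapture = new AudioCapture({
  sampleRate: 44100,          // sample rate in Hz
  channelCount: 1,            // 1 = mono, 2 = stereo
  logPath: './logs/audio.log' // where debug/status logs are written
});
```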
Configure audio capture
await capture.configure({
sampleRate: 44100,
channelCount: 1,
logPath: './audio.log'
});
Start audio capture
await capture.startCapture({ interval: 100 });
Stop audio capture
await capture.stopCapture();
Record audio for the specified duration (in milliseconds) and save it to the given output file
const filePath = await capture.record(5000, 'output.wav');
Save audio data as WAV file
const filePath = await capture.saveToWav('output.wav');
Get current audio data
const audioData = capture.getAudioData();
Clear audio buffer
capture.clearBuffer();
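For long-running captures, getAudioData() and clearBuffer() can be combined to drain the in-memory buffer periodically. This is a minimal sketch that assumes getAudioData() returns the array of segments accumulated so far (the same segments the data event delivers) and clearBuffer() discards them:

```javascript
const AudioCapture = require('./lib');

async function drainPeriodically() {
  const capture = new AudioCapture();
  await capture.configure({ sampleRate: 48000, channelCount: 2 });
  await capture.startCapture({ interval: 100 });

  // Once per second, inspect whatever has accumulated and reset the
  // buffer so memory stays bounded during long captures.
  const timer = setInterval(() => {
    const segments = capture.getAudioData(); // assumed: array of segments
    console.log(`Buffered segments: ${segments.length}`);
    capture.clearBuffer();
  }, 1000);

  // Capture for 10 seconds, then clean up.
  await new Promise((resolve) => setTimeout(resolve, 10000));
  clearInterval(timer);
  await capture.stopCapture();
}

drainPeriodically().catch((error) => console.error(error.message));
```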
- `configured`: Triggered when configuration is complete
- `started`: Triggered when capture starts
- `stopped`: Triggered when capture stops
- `data`: Triggered when audio data is received
- `saved`: Triggered when file save is complete
- `error`: Triggered when an error occurs
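All of the events above can be registered on a single instance. A minimal sketch, consistent with the earlier examples (only the data and error payloads appear in those examples; the saved payload shown here is an assumption):

```javascript
const AudioCapture = require('./lib');

const capture = new AudioCapture();

capture.on('configured', () => console.log('Configuration complete'));
capture.on('started', () => console.log('Capture started'));
capture.on('stopped', () => console.log('Capture stopped'));
capture.on('data', (audioData) => {
  console.log(`Received ${audioData.length} audio segments`);
});
capture.on('saved', (filePath) => {
  // Assumption: the saved event carries the output file path.
  console.log('File saved:', filePath);
});
capture.on('error', (error) => console.error('Capture error:', error.message));

// Register listeners before calling configure()/startCapture() so that
// no early events are missed.
```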
- Node.js 16+
- macOS 14.4+
- Swift 5.3+
- Xcode Command Line Tools
# Install dependencies
npm install
# Development build
npm run dev
# Production build
npm run build
# Run tests
npm test

# Run the example
npm run example

# Format code
npm run format

# Lint code
npm run lint
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Node.js App   │     │  NAPI Binding   │     │  Swift Module   │
│                 │────►│                 │────►│                 │
│ JavaScript API  │     │  C++ Interface  │     │  CoreAudio API  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                                               │
         ▼                                               ▼
┌─────────────────┐                             ┌─────────────────┐
│  Audio Buffer   │                             │   Process Tap   │
│                 │                             │                 │
│   Base64 Data   │                             │  System Audio   │
└─────────────────┘                             └─────────────────┘
- CoreAudio Process Tap: System-level audio capture
- Aggregate Device: Virtual audio device management
- NAPI (Node-API): Cross-language binding
- Event-Driven Architecture: Audio data and state changes are delivered as events
- Real-time Audio Processing: Captured audio can be processed as it arrives (sketched below)
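As an illustration of the real-time path, the sketch below assumes each segment delivered by the data event is (or contains) a Base64-encoded chunk of raw PCM, matching the "Base64 Data" buffer in the diagram above. The exact segment shape may differ, so treat this as a starting point rather than the module's guaranteed API:

```javascript
const AudioCapture = require('./lib');

async function processLive() {
  const capture = new AudioCapture();
  await capture.configure({ sampleRate: 48000, channelCount: 2 });

  capture.on('data', (segments) => {
    for (const segment of segments) {
      // Assumption: each segment is a Base64 string of raw PCM bytes.
      const pcm = Buffer.from(segment, 'base64');
      console.log(`Decoded ${pcm.length} bytes of PCM`);
      // Hand the bytes to an encoder, a WebSocket, a level meter, etc.
    }
  });

  await capture.startCapture({ interval: 100 });
}

processLive().catch((error) => console.error(error.message));
```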
MIT License - see LICENSE file for details
Issues and Pull Requests are welcome!
- Fork this repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
If you encounter issues or have suggestions, please:
- Check the Issues page
- Create a new Issue
- Contact the maintainer
If you encounter audio permission issues, make sure that:
- Microphone access is allowed in System Preferences
- System audio access is allowed under Security & Privacy
If the build fails, make sure that:
- Xcode Command Line Tools are installed: `xcode-select --install`
- The Swift version is >= 5.3: `swift --version`
- The Node.js version is >= 16: `node --version`
- CoreAudio - Apple's audio framework
- NAPI - Node.js native API
- Swift NAPI Bindings - Swift and NAPI binding library
Note: This module only supports macOS systems and requires appropriate audio permissions.