Implementation Guide
Tutorial

Building Client-Side Music Analysis: A Complete Implementation Guide

Step-by-step guide to building a production-ready music analysis application using WebAssembly, from setup to deployment on Cloudflare Pages.

James Wilson, Full-Stack Developer
January 29, 2025
30 min read
[Image: Music waveform visualization]

Project Overview

We'll build a complete client-side music analysis application that runs entirely in the browser. Users can upload audio files, visualize spectrograms, extract features, separate sources, and transcribe music—all without sending data to a server.

What We're Building

Audio Decoder

Support for MP3, FLAC, WAV, OGG

Spectrogram Analyzer

Real-time frequency visualization

Feature Extraction

MFCC, Chroma, Beat detection

Source Separation

AI-powered stem isolation

Step 1: Project Setup

Initialize Next.js Project
# Create new Next.js project with TypeScript
npx create-next-app@latest music-analysis --typescript --tailwind --app

# Navigate to project
cd music-analysis

# Install dependencies
npm install @wasm-audio-decoders/flac @wasm-audio-decoders/mpeg
npm install onnxruntime-web
npm install plotly.js-dist react-plotly.js
npm install @types/plotly.js

# Install UI components
npm install @radix-ui/react-slot @radix-ui/react-tabs
npm install class-variance-authority clsx tailwind-merge
npm install lucide-react
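
The @/components/ui/* imports used in later steps (Button, Card, Tabs) follow the shadcn/ui pattern, which is what class-variance-authority, clsx, and tailwind-merge are installed for above. If you generate those components with the shadcn CLI this comes for free; otherwise, here is a minimal sketch of the cn helper they rely on, assuming it lives at lib/utils.ts:

lib/utils.ts
import { clsx, type ClassValue } from 'clsx';
import { twMerge } from 'tailwind-merge';

// Merge conditional class names, letting later Tailwind classes override earlier ones
export function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs));
}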

Configure Next.js for static export and WASM support:

next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',

  // Enable WASM support
  webpack: (config, { isServer }) => {
    config.experiments = {
      ...config.experiments,
      asyncWebAssembly: true,
      layers: true,
    };

    // Handle WASM files
    config.module.rules.push({
      test: /\.wasm$/,
      type: 'asset/resource',
    });

    // Fix for essentia.js
    if (!isServer) {
      config.resolve.fallback = {
        ...config.resolve.fallback,
        fs: false,
        path: false,
        crypto: false,
      };
    }

    return config;
  },

  // Optimize for client-side only (requires the 'critters' package to be installed)
  experimental: {
    optimizeCss: true,
  },
};

module.exports = nextConfig;

Step 2: Core Audio Processing Module

lib/audio/AudioProcessor.ts
// Core audio processing class
export class AudioProcessor {
  private audioContext: AudioContext;
  private decoder: any;
  private essentia: any;
  private initialized = false;

  constructor() {
    this.audioContext = new (window.AudioContext ||
                            (window as any).webkitAudioContext)();
  }

  async initialize() {
    if (this.initialized) return;

    // Dynamic imports for code splitting
    const [decoderModule, essentiaModule] = await Promise.all([
      import('@wasm-audio-decoders/flac'),
      this.loadEssentia(),
    ]);

    this.decoder = new decoderModule.FLACDecoder();
    await this.decoder.ready;

    this.essentia = essentiaModule;
    this.initialized = true;
  }

  private async loadEssentia() {
    // Load essentia.js from CDN
    return new Promise((resolve) => {
      const script = document.createElement('script');
      script.src = 'https://unpkg.com/essentia.js@0.1.3/dist/essentia-wasm.js';
      script.onload = () => {
        const script2 = document.createElement('script');
        script2.src = 'https://unpkg.com/essentia.js@0.1.3/dist/essentia.js-core.js';
        script2.onload = () => {
          resolve((window as any).Essentia);
        };
        document.head.appendChild(script2);
      };
      document.head.appendChild(script);
    });
  }

  async decodeAudioFile(file: File): Promise<AudioData> {
    await this.initialize();

    const arrayBuffer = await file.arrayBuffer();
    const uint8Array = new Uint8Array(arrayBuffer);

    // Select decoder based on file type
    let decodedData;
    if (file.name.endsWith('.mp3')) {
      const { MPEGDecoder } = await import('@wasm-audio-decoders/mpeg');
      const mpegDecoder = new MPEGDecoder();
      await mpegDecoder.ready;
      decodedData = await mpegDecoder.decodeFile(uint8Array);
      mpegDecoder.free();
    } else if (file.name.endsWith('.flac')) {
      decodedData = await this.decoder.decodeFile(uint8Array);
    } else {
      // Use Web Audio API for other formats
      const audioBuffer = await this.audioContext.decodeAudioData(arrayBuffer);
      decodedData = {
        channelData: [audioBuffer.getChannelData(0)],
        sampleRate: audioBuffer.sampleRate,
        numberOfChannels: audioBuffer.numberOfChannels,
      };
    }

    return {
      pcmData: decodedData.channelData[0],
      sampleRate: decodedData.sampleRate,
      duration: decodedData.channelData[0].length / decodedData.sampleRate,
    };
  }

  computeSpectrogram(audioData: AudioData): Spectrogram {
    const { pcmData, sampleRate } = audioData;
    const fftSize = 2048;
    const hopSize = 512;
    const spectrogram: number[][] = [];

    // Hann window function
    const window = new Float32Array(fftSize);
    for (let i = 0; i < fftSize; i++) {
      window[i] = 0.5 - 0.5 * Math.cos((2 * Math.PI * i) / (fftSize - 1));
    }

    // Compute STFT frame by frame
    for (let i = 0; i <= pcmData.length - fftSize; i += hopSize) {
      const frame = pcmData.slice(i, i + fftSize);
      const windowedFrame = frame.map((s, j) => s * window[j]);

      // Magnitude spectrum of the windowed frame
      spectrogram.push(this.computeMagnitudeSpectrum(windowedFrame));
    }

    return {
      data: spectrogram,
      sampleRate,
      fftSize,
      hopSize,
      timeStep: hopSize / sampleRate,
      frequencyBins: fftSize / 2,
    };
  }

  private computeMagnitudeSpectrum(frame: Float32Array): number[] {
    // Iterative radix-2 Cooley-Tukey FFT (frame length must be a power of two,
    // which holds for fftSize = 2048). An AnalyserNode is not usable here: it
    // only reports the spectrum of audio that is actually being rendered.
    const n = frame.length;
    const re = Float64Array.from(frame);
    const im = new Float64Array(n);

    // Bit-reversal permutation
    for (let i = 1, j = 0; i < n; i++) {
      let bit = n >> 1;
      for (; j & bit; bit >>= 1) j ^= bit;
      j ^= bit;
      if (i < j) {
        [re[i], re[j]] = [re[j], re[i]];
      }
    }

    // Butterfly passes
    for (let len = 2; len <= n; len <<= 1) {
      const ang = (-2 * Math.PI) / len;
      const wLenRe = Math.cos(ang);
      const wLenIm = Math.sin(ang);
      for (let i = 0; i < n; i += len) {
        let wRe = 1;
        let wIm = 0;
        for (let k = 0; k < len / 2; k++) {
          const aRe = re[i + k];
          const aIm = im[i + k];
          const bRe = re[i + k + len / 2];
          const bIm = im[i + k + len / 2];
          const tRe = bRe * wRe - bIm * wIm;
          const tIm = bRe * wIm + bIm * wRe;
          re[i + k] = aRe + tRe;
          im[i + k] = aIm + tIm;
          re[i + k + len / 2] = aRe - tRe;
          im[i + k + len / 2] = aIm - tIm;
          const nextWRe = wRe * wLenRe - wIm * wLenIm;
          wIm = wRe * wLenIm + wIm * wLenRe;
          wRe = nextWRe;
        }
      }
    }

    // Keep the positive-frequency bins only
    const bins = n / 2;
    const magnitudes = new Array<number>(bins);
    for (let k = 0; k < bins; k++) {
      magnitudes[k] = Math.sqrt(re[k] * re[k] + im[k] * im[k]);
    }

    return magnitudes;
  }
}

interface AudioData {
  pcmData: Float32Array;
  sampleRate: number;
  duration: number;
}

interface Spectrogram {
  data: number[][];
  sampleRate: number;
  fftSize: number;
  hopSize: number;
  timeStep: number;
  frequencyBins: number;
}
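
Step 5 calls processor.extractFeatures(data), which the class above doesn't define yet. Here is a minimal sketch of that method, computing a few basic descriptors in plain TypeScript (RMS energy, zero-crossing rate, spectral centroid). Add it inside the AudioProcessor class; the essentia.js instance loaded in initialize() is where MFCC, chroma, and beat detection would plug in later.

lib/audio/AudioProcessor.ts (addition to the AudioProcessor class)
  // Basic feature extraction; extend with essentia.js algorithms as needed
  async extractFeatures(audioData: AudioData) {
    await this.initialize();
    const { pcmData, sampleRate } = audioData;

    // RMS energy
    let sumSquares = 0;
    for (let i = 0; i < pcmData.length; i++) sumSquares += pcmData[i] * pcmData[i];
    const rms = Math.sqrt(sumSquares / pcmData.length);

    // Zero-crossing rate (rough noisiness/brightness indicator)
    let crossings = 0;
    for (let i = 1; i < pcmData.length; i++) {
      if ((pcmData[i - 1] >= 0) !== (pcmData[i] >= 0)) crossings++;
    }
    const zeroCrossingRate = crossings / pcmData.length;

    // Spectral centroid averaged over the spectrogram frames
    const spec = this.computeSpectrogram(audioData);
    let centroidSum = 0;
    for (const frame of spec.data) {
      let weighted = 0;
      let total = 0;
      for (let k = 0; k < frame.length; k++) {
        const freq = (k * sampleRate) / spec.fftSize;
        weighted += freq * frame[k];
        total += frame[k];
      }
      centroidSum += total > 0 ? weighted / total : 0;
    }
    const spectralCentroid = centroidSum / spec.data.length;

    return { rms, zeroCrossingRate, spectralCentroid, sampleRate };
  }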

Step 3: React Components

components/AudioUploader.tsx
'use client';

import { useState, useRef } from 'react';
import { Upload, Music, Loader2 } from 'lucide-react';
import { Button } from '@/components/ui/button';
import { Card } from '@/components/ui/card';

interface AudioUploaderProps {
  onFileSelect: (file: File) => void;
  isProcessing?: boolean;
}

export function AudioUploader({ onFileSelect, isProcessing }: AudioUploaderProps) {
  const [dragActive, setDragActive] = useState(false);
  const [selectedFile, setSelectedFile] = useState<File | null>(null);
  const inputRef = useRef<HTMLInputElement>(null);

  const handleDrag = (e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    if (e.type === 'dragenter' || e.type === 'dragover') {
      setDragActive(true);
    } else if (e.type === 'dragleave') {
      setDragActive(false);
    }
  };

  const handleDrop = (e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    setDragActive(false);

    if (e.dataTransfer.files && e.dataTransfer.files[0]) {
      const file = e.dataTransfer.files[0];
      if (file.type.startsWith('audio/')) {
        setSelectedFile(file);
        onFileSelect(file);
      }
    }
  };

  const handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    e.preventDefault();
    if (e.target.files && e.target.files[0]) {
      const file = e.target.files[0];
      setSelectedFile(file);
      onFileSelect(file);
    }
  };

  return (
    <Card
      className={`relative p-8 border-2 border-dashed transition-all ${
        dragActive ? 'border-amber-400 bg-amber-400/10' : 'border-slate-700 bg-slate-800/50'
      }`}
      onDragEnter={handleDrag}
      onDragLeave={handleDrag}
      onDragOver={handleDrag}
      onDrop={handleDrop}
    >
      <input
        ref={inputRef}
        type="file"
        accept="audio/*"
        onChange={handleChange}
        className="hidden"
      />

      <div className="text-center">
        {isProcessing ? (
          <Loader2 className="w-12 h-12 mx-auto mb-4 text-amber-400 animate-spin" />
        ) : (
          <Music className="w-12 h-12 mx-auto mb-4 text-gray-400" />
        )}

        {selectedFile ? (
          <div className="space-y-2">
            <p className="text-white font-medium">{selectedFile.name}</p>
            <p className="text-gray-400 text-sm">
              {(selectedFile.size / 1024 / 1024).toFixed(2)} MB
            </p>
            {!isProcessing && (
              <Button
                onClick={() => inputRef.current?.click()}
                variant="outline"
                className="mt-4"
              >
                Choose Different File
              </Button>
            )}
          </div>
        ) : (
          <>
            <p className="text-gray-300 mb-2">
              Drag and drop your audio file here, or
            </p>
            <Button
              onClick={() => inputRef.current?.click()}
              className="bg-amber-500 hover:bg-amber-600 text-black"
            >
              <Upload className="w-4 h-4 mr-2" />
              Browse Files
            </Button>
            <p className="text-gray-500 text-sm mt-4">
              Supports MP3, FLAC, WAV, OGG · Max 50MB
            </p>
          </>
        )}
      </div>
    </Card>
  );
}

Step 4: Visualization Components

components/SpectrogramVisualizer.tsx
'use client';

import dynamic from 'next/dynamic';
import { useMemo } from 'react';

// Dynamic import for Plotly (client-side only)
const Plot = dynamic(() => import('react-plotly.js'), { ssr: false });

interface SpectrogramVisualizerProps {
  spectrogram: number[][];
  sampleRate: number;
  hopSize: number;
}

export function SpectrogramVisualizer({
  spectrogram,
  sampleRate,
  hopSize,
}: SpectrogramVisualizerProps) {
  const plotData = useMemo(() => {
    // Convert to dB scale
    const spectrogramDB = spectrogram.map(frame =>
      frame.map(value => 20 * Math.log10(Math.max(value, 1e-10)))
    );

    // Create time and frequency axes
    const timeAxis = Array.from(
      { length: spectrogram.length },
      (_, i) => (i * hopSize) / sampleRate
    );

    const freqAxis = Array.from(
      { length: spectrogram[0].length },
      (_, i) => (i * sampleRate) / (2 * spectrogram[0].length)
    );

    return [{
      type: 'heatmap',
      z: spectrogramDB[0].map((_, colIndex) =>
        spectrogramDB.map(row => row[colIndex])
      ),
      x: timeAxis,
      y: freqAxis,
      colorscale: 'Viridis',
      colorbar: {
        title: 'Magnitude (dB)',
        titleside: 'right',
      },
    }];
  }, [spectrogram, sampleRate, hopSize]);

  const layout = {
    title: 'Spectrogram',
    xaxis: {
      title: 'Time (s)',
      color: '#fff',
    },
    yaxis: {
      title: 'Frequency (Hz)',
      color: '#fff',
      type: 'log',
    },
    paper_bgcolor: 'rgba(0,0,0,0)',
    plot_bgcolor: 'rgba(0,0,0,0.1)',
    font: {
      color: '#fff',
    },
    margin: {
      l: 60,
      r: 60,
      t: 40,
      b: 40,
    },
  };

  return (
    <div className="w-full h-[400px] bg-slate-800/50 rounded-lg p-4">
      <Plot
        data={plotData}
        layout={layout}
        config={{
          responsive: true,
          displayModeBar: false,
        }}
        className="w-full h-full"
      />
    </div>
  );
}
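
The analyzer page in Step 5 also renders WaveformVisualizer and FeatureDisplay components that this guide doesn't walk through. Minimal sketches of both are included below so the page compiles; the props, styling, and file locations are assumptions you can adapt (a canvas min/max plot for the waveform, and a plain list of the extracted feature values).

components/WaveformVisualizer.tsx (sketch)
'use client';

import { useEffect, useRef } from 'react';

interface WaveformVisualizerProps {
  audioData: { pcmData: Float32Array; sampleRate: number } | null;
}

export function WaveformVisualizer({ audioData }: WaveformVisualizerProps) {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    const canvas = canvasRef.current;
    if (!canvas || !audioData) return;
    const ctx = canvas.getContext('2d');
    if (!ctx) return;

    const { width, height } = canvas;
    const { pcmData } = audioData;
    const samplesPerPixel = Math.max(1, Math.floor(pcmData.length / width));

    ctx.clearRect(0, 0, width, height);
    ctx.strokeStyle = '#f59e0b'; // amber, matching the rest of the UI
    ctx.beginPath();

    // Draw one min/max column per horizontal pixel
    for (let x = 0; x < width; x++) {
      let min = 1;
      let max = -1;
      const start = x * samplesPerPixel;
      for (let i = start; i < start + samplesPerPixel && i < pcmData.length; i++) {
        if (pcmData[i] < min) min = pcmData[i];
        if (pcmData[i] > max) max = pcmData[i];
      }
      ctx.moveTo(x, ((1 - max) / 2) * height);
      ctx.lineTo(x, ((1 - min) / 2) * height);
    }
    ctx.stroke();
  }, [audioData]);

  return (
    <div className="w-full h-[400px] bg-slate-800/50 rounded-lg p-4">
      <canvas ref={canvasRef} width={800} height={360} className="w-full h-full" />
    </div>
  );
}

components/FeatureDisplay.tsx (sketch)
'use client';

export function FeatureDisplay({ features }: { features: Record<string, unknown> | null }) {
  if (!features) return null;
  return (
    <div className="bg-slate-800/50 rounded-lg p-4 text-gray-300 space-y-1">
      {Object.entries(features).map(([name, value]) => (
        <p key={name}>
          {name}: {typeof value === 'number' ? value.toFixed(3) : String(value)}
        </p>
      ))}
    </div>
  );
}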

Step 5: Main Application Page

app/tools/analyzer/page.tsx
'use client';

import { useRef, useState } from 'react';
import { AudioUploader } from '@/components/AudioUploader';
import { SpectrogramVisualizer } from '@/components/SpectrogramVisualizer';
// Not shown in this guide; see the sketches at the end of Step 4
import { WaveformVisualizer } from '@/components/WaveformVisualizer';
import { FeatureDisplay } from '@/components/FeatureDisplay';
import { AudioProcessor } from '@/lib/audio/AudioProcessor';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
import { Tabs, TabsContent, TabsList, TabsTrigger } from '@/components/ui/tabs';
import { Button } from '@/components/ui/button';
import { Download, Play, Pause } from 'lucide-react';

export default function AnalyzerPage() {
  const [audioData, setAudioData] = useState<any>(null);
  const [spectrogram, setSpectrogram] = useState<any>(null);
  const [features, setFeatures] = useState<any>(null);
  const [isProcessing, setIsProcessing] = useState(false);
  const [isPlaying, setIsPlaying] = useState(false);

  // Create the processor lazily on the client: constructing it during render
  // would also run during static pre-rendering, where window is undefined.
  const processorRef = useRef<AudioProcessor | null>(null);
  const sourceRef = useRef<AudioBufferSourceNode | null>(null);

  const handleFileSelect = async (file: File) => {
    setIsProcessing(true);

    try {
      if (!processorRef.current) {
        processorRef.current = new AudioProcessor();
      }
      const processor = processorRef.current;

      // Decode audio
      const data = await processor.decodeAudioFile(file);
      setAudioData(data);

      // Compute spectrogram
      const spec = processor.computeSpectrogram(data);
      setSpectrogram(spec);

      // Extract features
      const feat = await processor.extractFeatures(data);
      setFeatures(feat);

    } catch (error) {
      console.error('Processing error:', error);
    } finally {
      setIsProcessing(false);
    }
  };

  const playAudio = () => {
    if (!audioData) return;

    // Stop the currently playing source instead of creating a new one
    if (isPlaying) {
      sourceRef.current?.stop();
      setIsPlaying(false);
      return;
    }

    const audioContext = new AudioContext();
    const buffer = audioContext.createBuffer(
      1,
      audioData.pcmData.length,
      audioData.sampleRate
    );

    buffer.copyToChannel(audioData.pcmData, 0);

    const source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);

    source.onended = () => setIsPlaying(false);

    sourceRef.current = source;
    source.start();
    setIsPlaying(true);
  };

  const downloadResults = () => {
    const results = {
      duration: audioData?.duration,
      sampleRate: audioData?.sampleRate,
      features: features,
    };

    const blob = new Blob([JSON.stringify(results, null, 2)], {
      type: 'application/json',
    });

    const url = URL.createObjectURL(blob);
    const a = document.createElement('a');
    a.href = url;
    a.download = 'audio-analysis.json';
    a.click();
    URL.revokeObjectURL(url);
  };

  return (
    <div className="min-h-screen bg-gradient-to-br from-slate-900 via-purple-900 to-slate-900">
      <div className="max-w-7xl mx-auto px-4 py-12">
        <h1 className="text-4xl font-bold text-white mb-8">
          Audio Analyzer
        </h1>

        <div className="grid grid-cols-1 lg:grid-cols-3 gap-8">
          <div className="lg:col-span-1">
            <AudioUploader
              onFileSelect={handleFileSelect}
              isProcessing={isProcessing}
            />

            {audioData && (
              <Card className="mt-4 bg-slate-800/50 border-slate-700">
                <CardHeader>
                  <CardTitle className="text-white">Audio Info</CardTitle>
                </CardHeader>
                <CardContent className="space-y-2 text-gray-300">
                  <p>Duration: {audioData.duration.toFixed(2)}s</p>
                  <p>Sample Rate: {audioData.sampleRate} Hz</p>
                  <p>Samples: {audioData.pcmData.length.toLocaleString()}</p>

                  <div className="flex gap-2 mt-4">
                    <Button
                      onClick={playAudio}
                      size="sm"
                      className="bg-amber-500 hover:bg-amber-600 text-black"
                    >
                      {isPlaying ? (
                        <><Pause className="w-4 h-4 mr-1" /> Pause</>
                      ) : (
                        <><Play className="w-4 h-4 mr-1" /> Play</>
                      )}
                    </Button>

                    <Button
                      onClick={downloadResults}
                      size="sm"
                      variant="outline"
                      className="border-amber-400 text-amber-400"
                    >
                      <Download className="w-4 h-4 mr-1" />
                      Export
                    </Button>
                  </div>
                </CardContent>
              </Card>
            )}
          </div>

          <div className="lg:col-span-2">
            {spectrogram && (
              <Tabs defaultValue="spectrogram" className="w-full">
                <TabsList className="bg-slate-800">
                  <TabsTrigger value="spectrogram">Spectrogram</TabsTrigger>
                  <TabsTrigger value="waveform">Waveform</TabsTrigger>
                  <TabsTrigger value="features">Features</TabsTrigger>
                </TabsList>

                <TabsContent value="spectrogram">
                  <SpectrogramVisualizer
                    spectrogram={spectrogram.data}
                    sampleRate={spectrogram.sampleRate}
                    hopSize={spectrogram.hopSize}
                  />
                </TabsContent>

                <TabsContent value="waveform">
                  <WaveformVisualizer audioData={audioData} />
                </TabsContent>

                <TabsContent value="features">
                  <FeatureDisplay features={features} />
                </TabsContent>
              </Tabs>
            )}
          </div>
        </div>
      </div>
    </div>
  );
}
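
One item from the overview, source separation, isn't covered by the steps above even though onnxruntime-web was installed in Step 1. As a rough sketch of how a stem-isolation model could be wired in: the model URL, tensor shape, and input/output handling below are placeholders for whatever ONNX model you export, not something this guide ships.

lib/audio/SourceSeparator.ts (sketch)
import * as ort from 'onnxruntime-web';

// Hypothetical example: a model exported to ONNX that maps a mono waveform to
// N stems. Adjust the path, tensor names, and shapes to match your model.
export class SourceSeparator {
  private session: ort.InferenceSession | null = null;

  async load(modelUrl = '/models/separator.onnx') {
    // The WASM execution provider keeps inference entirely client-side
    this.session = await ort.InferenceSession.create(modelUrl, {
      executionProviders: ['wasm'],
    });
  }

  async separate(pcmData: Float32Array): Promise<Float32Array[]> {
    if (!this.session) throw new Error('Call load() first');

    // Shape [batch, samples]; your model may expect a different layout
    const input = new ort.Tensor('float32', pcmData, [1, pcmData.length]);
    const results = await this.session.run({ input });

    // Assume one output tensor per stem (e.g. vocals, accompaniment)
    return Object.values(results).map((tensor) => tensor.data as Float32Array);
  }
}

You would call load() once, ideally behind a Web Worker so inference doesn't block the UI, and then pass audioData.pcmData from the analyzer page into separate().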

Step 6: Deploy to Cloudflare Pages

Deployment Commands
# Build the static site
npm run build

# Install Wrangler CLI
npm install -g wrangler

# Login to Cloudflare
wrangler login

# Create Pages project
wrangler pages project create music-analysis

# Deploy to Cloudflare Pages
wrangler pages deploy out --project-name=music-analysis

# Custom domains (optional) are attached in the Cloudflare dashboard:
# Pages project -> Custom domains -> Set up a custom domain

Cloudflare Configuration

Add these headers in your _headers file. The COOP/COEP pair makes the site cross-origin isolated, which SharedArrayBuffer-backed multithreaded WASM requires, and the .wasm rules set the correct MIME type with long-lived caching. Note that require-corp also blocks cross-origin resources that aren't CORS or CORP enabled, so verify that CDN-loaded scripts such as the unpkg essentia.js bundles still load after enabling it:

out/_headers
/*
  Cross-Origin-Embedder-Policy: require-corp
  Cross-Origin-Opener-Policy: same-origin

/*.wasm
  Content-Type: application/wasm
  Cache-Control: public, max-age=31536000, immutable

Performance Optimization

Code Splitting
Dynamic imports ensure WASM modules are only loaded when needed, reducing the initial bundle size; the snippet below shows one way to verify the split.
Security
All processing happens client-side. Audio files never leave the user's browser.
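
To confirm that the decoders and ONNX runtime really stay out of the initial bundle, you can wrap the config from Step 1 with @next/bundle-analyzer (an extra dev dependency, not installed above) and run the build with ANALYZE=true:

next.config.js (optional addition)
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

// Replace the existing export line at the bottom of the file
module.exports = withBundleAnalyzer(nextConfig);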

Conclusion

You now have a complete, production-ready music analysis application running entirely in the browser. This architecture provides professional-grade audio processing capabilities while maintaining user privacy and eliminating server costs.

Start Building Today

Ready to explore the complete toolkit? Check out our live demos.