
Wednesday, May 9, 2012

[Pinned] Sources

Getting a simple project started requires synthesizing a large and disparate set of concepts, documentation, mailing lists, and samples.
http://forum7.hkgolden.com/view.aspx?message=2957805&page=4

Audio Unit set-up steps:

Set properties:
1. Find your audio components.
2. Set your component properties (audio format, channels, etc.).
When you are finished with the audio unit, deallocate it.

Connection:
1. Directly connect to other audio units or to callback functions.
2. Write your OWN callback functions to fill audio buffers in chunks or sample-by-sample.

Initialization:
1. Initialize and play.


Audio Unit Setup
Some substantial code is required to create audio units and set up the audio processing graph. These are the steps that MixerHostAudio uses for initialization:
  1. Set up an active audio session.
  2. Obtain files, set up their formats, and read them into memory.
  3. Create an audio processing graph.
  4. Create and initialize the Multichannel Mixer and Remote I/O audio units.
  5. Add the audio unit nodes to the graph and connect them.
  6. Start processing audio with the graph.
The details of the audio unit setup are covered in the Audio Unit Initialization section of this document. 



Constructing an audio processing graph entails three tasks:
  1. Adding nodes to a graph
  2. Directly configuring the audio units represented by the nodes
  3. Interconnecting the nodes


No matter which design pattern you choose, the steps for constructing an audio unit hosting app are basically the same:
  1. Configure your audio session.
  2. Specify audio units.
  3. Create an audio processing graph, then obtain the audio units.
  4. Configure the audio units.
  5. Connect the audio unit nodes.
  6. Provide a user interface.
  7. Initialize and then start the audio processing graph.



The correct way to derive ASBD (AudioStreamBasicDescription) field values depends on three factors:
  • Whether the stream is for I/O (SetCanonical) or for audio processing (SetAUCanonical)
  • How many channels you want the stream format to represent
  • Whether you want the stream format interleaved or non-interleaved


An audio unit’s life cycle proceeds as follows:
-At runtime, obtain a reference to the dynamically-linkable library that defines an audio unit you want to use.
-Instantiate the audio unit.
-Configure the audio unit as required for its type and to accommodate the intent of your app.
-Initialize the audio unit to prepare it to handle audio.
-Start audio flow.
-Control the audio unit.




Audio (esp. Audio Units):
http://www.cocoabuilder.com/archive/cocoa/301070-ios-play-streaming-mp3-audio-with-effects.html

http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html

http://stackoverflow.com/questions/5320293/custom-eq-audiounit-on-ios

iOS: Audio Units vs OpenAL vs Core Audio
http://stackoverflow.com/questions/4014614/ios-audio-units-vs-openal-vs-core-audio

Simple and Graphical Introduction to Audio Unit and AUGraph
http://www.dribin.org/dave/resources/files/2010/ipdcchi_Dribin_AudioUnits.pdf




Programming Audio on the iPhone (basic contents):


Speaker: Robert Strojan - Blackout Labs (basic concept)
http://www.blackoutlabs.com/iPDC/iOSAudioLandscapeiPDC.pdf

core-audio-dont-be-afraid-to-play-it-loud:(basic concept)

Audio Unit Basics: Getting Started (basic concept)
http://timbolstad.com/2010/03/14/core-audio-getting-started/


iZotope iOS Audio Programming Guide (open source) (basic concept)
*A good explanation of the Apple sample code (MixerHost and even aurioTouch); recommended to read it closely.


Play Music using AudioUnit (basic)
AudioUnit playback - Introduction to Core Audio programming
http://seruziu.blogspot.com/2010/06/play-music-using-audiounit_29.html




Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS (source code only):
http://www.informit.com/store/product.aspx?isbn=0321636848
A short preview:
http://my.safaribooksonline.com/book/audio/9780321636973


[Time code]; blog (with source code):
http://www.subfurther.com/blog/
It provides a strong foundation for the things you need to do when programming Audio Units. He includes 5 great examples that start out very basic and evolve to fairly involved ones.
http://www.subfurther.com/blog/2010/04/30/what-you-missed-at-360idev/

Aran Mulholland's blog (with source code):

Michael Tyson's blog: Using the RemoteIO audio unit (advanced)


audiograph (advanced)
audiograph source code:
https://github.com/tkzic/audiograph





Audio Units Apple Source:
The Audio Unit Hosting Guide for iOS from the iOS Developer Library:
http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/Introduction/Introduction.html

Using Audio:
Core Audio Overview:

The MixerHost sample application:
http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html
The aurioTouch sample application: http://developer.apple.com/library/ios/#samplecode/aurioTouch/Introduction/Intro.html
The iPhoneMixerEQGraphTest sample application: http://developer.apple.com/library/ios/#samplecode/iPhoneMixerEQGraphTest/Introduction/Intro.html

How to use audio units in your app

By far the best documentation on this topic is Apple's Audio Unit Hosting Guide for iOS in the iOS Developer Library. For a more general introduction, you can check out the Core Audio Overview.
I also found the MixerHost and iPhoneMultichannelMixerTest sample code very useful for getting started with audio units.
Finally, I found the class and service references, such as the Audio Unit Processing Graph Services Reference and the Audio Unit Component Services Reference, useful for exploring classes, constants, the functionality of specific methods, and so on.
The LoadPresetDemo sample demonstrates how to use the Sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214

If you want to load different sound formats for playback, this article is very helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html






MusicDSP: contains a lot of source code for sound effects:
http://www.musicdsp.org/index.php

CADebugPrintf.h
http://svn.perian.org/trunk/CoreAudio/PublicUtility/


http://disanji.net/2011/02/14/ios-tone-generator-introduction-to-html/

http://timbolstad.com/2010/03/14/core-audio-getting-started/

http://www.360doc.com/content/10/0117/12/390124_13792343.shtml

http://itunes.apple.com/us/app/pocket-rta-spectrum-analyser/id317080174?mt=8


THE SCIENTIST & ENGINEER'S GUIDE TO DIGITAL SIGNAL PROCESSING


http://www.analog.com/en/content/scientist_engineers_guide/fca.html




mp3 decoding:

Frequency and Spectrum Analysis


Filter:
http://www.falstad.com/dfilter/directions.html

Online DSP book:
http://www.dspguide.com/ch12/2.htm

MPEG-1 Layer 3 (MP3) decoding algorithm explained in detail:
http://mp4tech.net/document/audiocomp/0000298.asp
(source: http://mp4tech.net/document/audiocomp/mp3_special.asp)


Exact Audio Copy 0.99beta4, Traditional Chinese portable version (no install):

http://smartpg.pixnet.net/blog/post/20225391-exact-audio-copy-0.99beta4-%E6%AD%A3%E9%AB%94%E4%B8%AD%E6%96%87%E5%85%8D%E5%AE%89%E8%A3%9D

www.mp3-tech.org 
http://www.mp3-tech.org/

OPENGL ES:

Simon's source code download: https://github.com/mauriceatron


A mini tutorial suitable for beginners to iPhone OpenGL ES:
http://www.iphone.org.hk/cgi-bin/ch/topic_show.cgi?id=10972

OpenGL ES from the Ground Up: Table of Contents
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html


OpenGL ES for iPhone : A Simple Tutorial Part 1



iPhone 3D Programming
Developing Graphical Applications with OpenGL ES
http://ofps.oreilly.com/titles/9780596804824/


Learning OpenGL ES from scratch


http://www.hksilicon.com/kb/articles/17919/OpenGL-ES-


EAGLView.h
http://book.51cto.com/art/201108/285446.htm


Quartz2D and OpenGL ES:

iPhone development basics, common topics (1): QuartzCore


An introduction to drawing with QuartzCore:
http://my.oschina.net/ahuaahua/blog/29122


Quartz 2D Programming Guide:


https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_affine/dq_affine.html


An introduction to OpenGL + 6 official iPhone sample programs



How to draw bar charts on the iPhone

http://d2100.com/post/4

DSP:
http://www.scribd.com/doc/53108402/179/vDSP-destroy-fftsetup#page=5


The Scientist and Engineer's Guide to Digital Signal Processing (by Steven W. Smith) (free online DSP book)


Spectrum Analysis:
Spectrum Analysis Graph implementation:
BeatDetektor - iPhone app and Open Source algorithm for BPM detection
http://www.gearslutz.com/board/product-alerts-older-than-2-months/457617-beatdetektor-iphone-app-open-source-algorithm-bpm-detection.html




Spectrum analysis of music:
http://www.cocoachina.com/ask/questions/show/2821/%EF%BB%BF关于音乐播放器上随节奏跳动的光柱效果。。也可以说音乐的频谱分析。。求资料。。求教育
(The question asks about the light-bar effect on a music player that jumps with the rhythm, in other words, spectrum analysis of the music.)
Take a look at fmod: http://www.fmod.org/. Use fmod with a timer to pull the data out yourself, then draw it. (open source)
http://www.fmod.org/fmod-downloads.html
streaming_bank.fsb
http://trac.assembla.com/p400roguelike/browser/P400Roguelike/FMOD%20Programmers%20API%20Win32/fmoddesignerapi/examples/media/streaming_bank.fsb?rev=1


Audio decoding: easily stealing phone-banking passwords (news video)
http://hk.news.yahoo.com/video/topstory-19458512/title-28847773.html




FFT:

Opensource FFT for the iPhone

http://oscopeapp.com/opensource-fft-for-the-iphone


A monograph on Fourier transform algorithms - for dragon
http://www.programmer-club.com/showSameTitleN/general/3041.html


Equalizer:
8 Easy Steps To Better EQ
http://audio.tutsplus.com/tutorials/mixing-mastering/8-easy-steps-to-better-eq/


Pitch Shifting Using The Fourier Transform (by Stephan M. Bernsee)
This article and accompanying source code unveil the mystery of programming in the frequency domain.
http://www.dspdimension.com/admin/pitch-shifting-using-the-ft/ 



Other:
Apple mail lists:
http://lists.apple.com/


VTMAUGraphDemo (by Chris Adamson)
Sample code that demonstrates new core audio features in iOS 5
http://www.subfurther.com/blog/2011/11/16/what-you-missed-at-voices-that-matter-ios-fall-2011/


Stack Overflow (core-audio tag)
http://stackoverflow.com/questions/tagged/core-audio

Handling Audio Hardware Route Changes
http://disanji.net/iOS_Doc/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/HandlingRouteChanges/HandlingRouteChanges.html
A flowchart representation of how Core Audio and your property listener callback function interact to provide a good user experience upon an audio hardware route change.





An introduction to DSP chips
http://alliance.cust.edu.tw/portfolio/myPortfolio?path=Christiano_David&page=28212

What is audio rendering?
It just means mixing down audio:
combining several tracks of audio into a single file when doing multi-track recording.

The term "render" is more often used these days in many audio software programs, but the two terms mean essentially the same thing.
http://www.homebrewaudio.com/8575/what-does-it-mean-to-mix-down-audio/
http://www.homebrewaudio.com/9635/mixing-for-loudness/
http://tutorials.renoise.com/wiki/Render_Song_to_Audio_File
Another post on this blog explains it:
http://gian3211.blogspot.com/2012/04/render.html

Some sites seem to interpret "rendering" as playback:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff536203(v=vs.85).aspx
P.S. For a concept born in an English-speaking culture, failing to find a suitable translated term is a common occurrence.

However, according to AudioUnitRender in Apple's AudioUnit framework (AUComponent.h):



extern OSStatus
AudioUnitRender(AudioUnit                  inUnit,
                AudioUnitRenderActionFlags *ioActionFlags,
                const AudioTimeStamp       *inTimeStamp,
                UInt32                     inOutputBusNumber,
                UInt32                     inNumberFrames,
                AudioBufferList            *ioData)
    __OSX_AVAILABLE_STARTING(__MAC_10_2,__IPHONE_2_0);


@function
AudioUnitRender

@abstract
The render operation, where ioData will contain the results of the audio unit's render operations.

From this we can see that rendering is a processing operation (sized by inNumberFrames) that applies the configured settings and elements to produce the ioData buffer.

@discussion
An audio unit will render the amount of audio data described by inNumberFrames, and the results of that render will be contained within ioData. The caller should provide audio time stamps where at least the sample time is valid and is incrementing sequentially from its previous call (so, the next time stamp will be the current time stamp + inNumberFrames). If the sample time is not incrementing sequentially, the audio unit will infer that there is some discontinuity with the timeline it is rendering for.
The caller must provide a valid ioData AudioBufferList that matches the expected topology for the current audio format for the given bus. The buffer list can be of two variants:
(1) If the mData pointers are non-null, then the audio unit will render its output into those buffers. These buffers should be aligned to 16-byte boundaries (which is normally what malloc will return).

(2) If the mData pointers are null, then the audio unit can provide pointers to its own buffers. In this case the audio unit is required to keep those buffers valid for the duration of the calling thread's I/O cycle.