Sound on the iPhone and buffers: many questions

Hi, I have a bunch of questions related to sound on the iPhone.

1. Is the sound library OpenAL?
2. Is it possible to use another one, like Maximilian?
3. If yes, where can I learn how to change the library?

It seems that the only things that can be changed on samples are pitch, pan, and volume.
I can't play samples backwards or add effects…

So I started thinking I should try to synthesize sounds. I guessed it was related to buffers, and I am trying to understand iphoneAudioOutputExample now.

4. Where can I learn about buffers and synthesis?

In the example, the sine wave and the noise are programmed into the buffer (slice after slice, incrementing in a loop to fill up the buffer).
I was thinking: maybe I could import a really short wave file, grab all the slices, and then save them in an array to use as a buffer.
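
Something like this, maybe (just a sketch of the idea, not tested; I am assuming the wave file is already decoded into a plain float array somewhere, and I think the callback the example fills is called audioRequested):

#include "ofMain.h"

// hypothetical globals: the decoded wave file and a play position
float * waveSamples = NULL;   // all the samples of the short wave file (mono)
int     waveLength  = 0;      // how many samples it contains
int     playPos     = 0;      // where we are in the wave right now

void testApp::audioRequested(float * output, int bufferSize, int nChannels){
    for (int i = 0; i < bufferSize; i++){
        // grab one slice of the wave and copy it into the buffer
        float sample = waveSamples[playPos];
        output[i * nChannels    ] = sample;   // left channel
        output[i * nChannels + 1] = sample;   // right channel
        playPos++;
        if (playPos >= waveLength) playPos = 0;   // loop back to the start
    }
}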

5. Am I totally wrong, or does this sound logical? Is it possible?

6. Where can I learn about "samplestream"? I am not sure what it is.

Sorry for these noob questions, but I really don't know where to start on this topic, and
there will be more questions.

Thank you all.

New question:

If I have a sample rate of 44100 and a buffer size of 512, do I get around 86 buffers per second?
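
My arithmetic, in case I have it wrong: 44100 samples per second ÷ 512 samples per buffer ≈ 86.13 buffers per second.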

New question:

I think the samples in a buffer are floats, but over what range do they vary?

0 to 1?
0 to 256?

In iphoneAudioOutputExample there is this call:

  
ofSoundStreamSetup(2,0,this, sampleRate, initialBufferSize, 4);  

What is "this"?
In the prototype its type is ofBaseApp * OFSA. (My current guess is sketched below, after the prototype.)

Here is the prototype:

  
void ofSoundStreamSetup(int nOutputChannels, int nInputChannels, ofBaseApp * OFSA, int sampleRate, int bufferSize, int nBuffers);  
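
While waiting for answers, here is my current guess about "this", sketched against the prototype above (so I inherit from ofBaseApp directly; the real iPhone example probably uses an iPhone-specific app class, and I am assuming the callback is named audioRequested):

#include "ofMain.h"

// guess: "this" is the app object itself. It inherits from ofBaseApp, so
// ofSoundStreamSetup() can keep a pointer to it and call its audio callback
// every time a new buffer of samples is needed.
class testApp : public ofBaseApp {
public:
    int sampleRate;
    int initialBufferSize;

    void setup(){
        sampleRate        = 44100;
        initialBufferSize = 512;
        // 2 outputs, 0 inputs, callbacks delivered to this object, 4 buffers
        ofSoundStreamSetup(2, 0, this, sampleRate, initialBufferSize, 4);
    }

    // the sound stream calls this when it wants bufferSize new samples per channel
    void audioRequested(float * output, int bufferSize, int nChannels){
        for (int i = 0; i < bufferSize; i++){
            output[i * nChannels    ] = 0.0f;   // left channel (silence for now)
            output[i * nChannels + 1] = 0.0f;   // right channel
        }
    }
};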

And in the comments, what does this mean:

// 4 num buffers (latency)

Is it the nBuffers from the prototype?
Does it mean that we use only 4 buffers to generate our sound?
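
A guess about the latency part, so someone can correct me: if the stream keeps nBuffers = 4 buffers of 512 samples queued up, the output latency would be roughly nBuffers × bufferSize / sampleRate = 4 × 512 / 44100 ≈ 0.046 s, i.e. about 46 ms?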