Scenario: when your app retreats to the background and another app capable of audio playback comes to the foreground, you may need to pause your app's playback and unregister its Media Button listener, handing control over to the foreground app.
This is what listening for audio focus is for.
Before starting playback, request focus with AudioManager's requestAudioFocus method.
When requesting audio focus, you specify which stream type you will use (e.g. STREAM_MUSIC) and how long you intend to hold the focus.
Naturally, from a programming standpoint, your app should react both when it gains focus and when it loses focus to another app.
Example: requesting audio focus
AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);

// Request audio focus for playback.
int result = am.requestAudioFocus(focusChangeListener,
    // Use the music stream.
    AudioManager.STREAM_MUSIC,
    // Request permanent focus.
    AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    mediaPlayer.start();
}
Listening for changes in (and loss of) audio focus:
private OnAudioFocusChangeListener focusChangeListener =
    new OnAudioFocusChangeListener() {
    public void onAudioFocusChange(int focusChange) {
        AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
        switch (focusChange) {
            case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK):
                // Lower the volume while ducking.
                mediaPlayer.setVolume(0.2f, 0.2f);
                break;
            case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT):
                pause();
                break;
            case (AudioManager.AUDIOFOCUS_LOSS):
                stop();
                ComponentName component =
                    new ComponentName(AudioPlayerActivity.this,
                                      MediaControlReceiver.class);
                am.unregisterMediaButtonEventReceiver(component);
                break;
            case (AudioManager.AUDIOFOCUS_GAIN):
                // Return the volume to normal and resume if paused.
                mediaPlayer.setVolume(1f, 1f);
                mediaPlayer.start();
                break;
            default:
                break;
        }
    }
};
Abandoning audio focus:
AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
am.abandonAudioFocus(focusChangeListener);
When the headset is unplugged and audio output falls back to the speaker, you may want to lower the volume or pause playback first. How do you listen for this change in output path?
Answer:
private class NoisyAudioStreamReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (AudioManager.ACTION_AUDIO_BECOMING_NOISY.equals(intent.getAction())) {
            pause();
        }
    }
}
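The receiver above still has to be registered for ACTION_AUDIO_BECOMING_NOISY. A minimal sketch follows; tying registration to the activity's onResume/onPause is my assumption, not from the original:

private final NoisyAudioStreamReceiver noisyReceiver = new NoisyAudioStreamReceiver();

@Override
protected void onResume() {
    super.onResume();
    // Listen for the audio output "becoming noisy" (e.g. headset unplugged).
    registerReceiver(noisyReceiver,
        new IntentFilter(AudioManager.ACTION_AUDIO_BECOMING_NOISY));
}

@Override
protected void onPause() {
    super.onPause();
    unregisterReceiver(noisyReceiver);
}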
Use the AudioRecord class to record audio. Create an AudioRecord, specifying the audio source, sample rate (frequency), channel configuration, audio encoding, and buffer size.
int bufferSize = AudioRecord.getMinBufferSize(frequency,
    channelConfiguration, audioEncoding);

AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
    frequency, channelConfiguration, audioEncoding, bufferSize);
The sample rate, audio encoding, and channel configuration affect the size and quality of the recording.
For privacy reasons, Android requires the RECORD_AUDIO permission:
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
Once the AudioRecord object has been initialized, call startRecording to begin asynchronous recording, and use read to pull the raw audio data into the recording buffer:
audioRecord.startRecording();
while (isRecording) {
    // read() fills the buffer with raw audio captured from the microphone.
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
    // TODO Process the buffered audio.
}

So once you've captured the raw audio data, what do you play it back with?
Answer: use AudioTrack to play back this kind of audio.
A recording example:
int frequency = 11025;
int channelConfiguration = AudioFormat.CHANNEL_IN_MONO; // input channel config
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

File file = new File(Environment.getExternalStorageDirectory(), "raw.pcm");

// Create the new file.
try {
    file.createNewFile();
} catch (IOException e) {
    Log.d(TAG, "IO Exception", e);
}

try {
    OutputStream os = new FileOutputStream(file);
    BufferedOutputStream bos = new BufferedOutputStream(os);
    DataOutputStream dos = new DataOutputStream(bos);

    int bufferSize = AudioRecord.getMinBufferSize(frequency,
        channelConfiguration, audioEncoding);
    short[] buffer = new short[bufferSize];

    // Create a new AudioRecord object to record the audio.
    AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        frequency, channelConfiguration, audioEncoding, bufferSize);

    audioRecord.startRecording();
    while (isRecording) {
        int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
        for (int i = 0; i < bufferReadResult; i++)
            dos.writeShort(buffer[i]);
    }
    audioRecord.stop();
    dos.close();
} catch (Throwable t) {
    Log.d(TAG, "An error occurred during recording", t);
}
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
    frequency,
    AudioFormat.CHANNEL_OUT_MONO, // AudioTrack needs an *output* channel config.
    audioEncoding,
    audioLength * 2,              // Buffer size in bytes (16-bit samples).
    AudioTrack.MODE_STREAM);
audioTrack.play();
audioTrack.write(audio, 0, audioLength);

The write method pushes the raw audio data into the playback buffer.
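The audio array and audioLength in the snippet above are left undefined. As a minimal sketch (assuming the raw.pcm file recorded earlier, with 16-bit samples at 2 bytes each), they could be loaded like this:

File file = new File(Environment.getExternalStorageDirectory(), "raw.pcm");
int audioLength = (int)(file.length() / 2); // 16-bit PCM: 2 bytes per sample
short[] audio = new short[audioLength];

try {
    // Read the raw samples back into memory for playback.
    DataInputStream dis = new DataInputStream(
        new BufferedInputStream(new FileInputStream(file)));
    for (int i = 0; i < audioLength; i++) {
        audio[i] = dis.readShort();
    }
    dis.close();
} catch (IOException e) {
    Log.d(TAG, "IO Exception", e);
}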
SoundPool is generally used to play short sound effects, and it supports playing multiple streams simultaneously.
Straight to an example:
int maxStreams = 10;
SoundPool sp = new SoundPool(maxStreams, AudioManager.STREAM_MUSIC, 0);

int track1 = sp.load(this, R.raw.track1, 0);
int track2 = sp.load(this, R.raw.track2, 0);
int track3 = sp.load(this, R.raw.track3, 0);

// play() returns a stream ID, which is what stop() and setRate() expect;
// stream1, stream2, and stream3 are assumed to be int fields of the class.
track1Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        stream1 = sp.play(track1, 1, 1, 0, -1, 1);   // Loop indefinitely.
    }
});
track2Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        stream2 = sp.play(track2, 1, 1, 0, 0, 1);    // Play once.
    }
});
track3Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        stream3 = sp.play(track3, 1, 1, 0, 0, 0.5f); // Play at half speed.
    }
});
stopButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        sp.stop(stream1);
        sp.stop(stream2);
        sp.stop(stream3);
    }
});
chipmunkButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        sp.setRate(stream1, 2f);                     // Double the playback rate.
    }
});
Android 2.2 (API Level 8) introduced two very convenient methods, autoPause and autoResume, which respectively pause and resume all actively playing audio streams.
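For example, using the sp instance from the snippet above:

// Pause every stream currently playing in this SoundPool ...
sp.autoPause();

// ... and later resume all of the streams that autoPause() paused.
sp.autoResume();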
When you no longer need the loaded sounds, call release() on the SoundPool (e.g. sp.release();) to free its resources.
Taking pictures using Intents:
startActivityForResult(
    new Intent(MediaStore.ACTION_IMAGE_CAPTURE), TAKE_PICTURE);

Naturally you handle the result in the corresponding onActivityResult; by default the returned photo arrives as a thumbnail.
If you want the full-size image, you must first specify a target file to store it in, as the following example shows:
// Create an output file.
File file = new File(Environment.getExternalStorageDirectory(), "test.jpg");
Uri outputFileUri = Uri.fromFile(file);

// Generate the Intent.
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);

// Launch the camera app.
startActivityForResult(intent, TAKE_PICTURE);
Note: once you launch the camera this way, no thumbnail is returned, so the Intent received in onActivityResult will be null.
The following onActivityResult handles both cases:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == TAKE_PICTURE) {
        // Check if the result includes a thumbnail Bitmap.
        if (data != null) {
            if (data.hasExtra("data")) {
                Bitmap thumbnail = data.getParcelableExtra("data");
                imageView.setImageBitmap(thumbnail);
            }
        } else {
            // If there is no thumbnail image data, the image
            // will have been stored in the target output URI.

            // Resize the full image to fit in our image view.
            int width = imageView.getWidth();
            int height = imageView.getHeight();

            BitmapFactory.Options factoryOptions = new BitmapFactory.Options();
            factoryOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeFile(outputFileUri.getPath(), factoryOptions);

            int imageWidth = factoryOptions.outWidth;
            int imageHeight = factoryOptions.outHeight;

            // Determine how much to scale down the image.
            int scaleFactor = Math.min(imageWidth / width, imageHeight / height);

            // Decode the image file into a Bitmap sized to fill the view.
            factoryOptions.inJustDecodeBounds = false;
            factoryOptions.inSampleSize = scaleFactor;
            factoryOptions.inPurgeable = true;

            Bitmap bitmap = BitmapFactory.decodeFile(outputFileUri.getPath(),
                factoryOptions);
            imageView.setImageBitmap(bitmap);
        }
    }
}
First, this permission is indispensable:

<uses-permission android:name="android.permission.CAMERA"/>

You read the camera's properties through its Parameters object:
Camera.Parameters parameters = camera.getParameters();
Through these parameters you can discover a great deal about the camera; some of them depend on the platform version.
You can get the focal length, plus the horizontal and vertical view angles, via getFocalLength and get[Horizontal/Vertical]ViewAngle respectively.
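For instance, a small sketch (the variable names are mine; the units are millimeters and degrees per the platform documentation):

Camera.Parameters parameters = camera.getParameters();
float focalLength = parameters.getFocalLength();              // millimeters
float horizontalAngle = parameters.getHorizontalViewAngle();  // degrees
float verticalAngle = parameters.getVerticalViewAngle();      // degrees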
Android 2.3 (API Level 9) introduced the getFocusDistances method, which you can use to estimate the distance between the lens and the subject. It fills a float array you pass in with the near, far, and optimal focus distances:
float[] focusDistances = new float[3];
parameters.getFocusDistances(focusDistances);

float near = focusDistances[Camera.Parameters.FOCUS_DISTANCE_NEAR_INDEX];
float far = focusDistances[Camera.Parameters.FOCUS_DISTANCE_FAR_INDEX];
float optimal = focusDistances[Camera.Parameters.FOCUS_DISTANCE_OPTIMAL_INDEX];
Parameters are changed through the matching set* methods on the Parameters object; once you've modified it, apply the changes with:
camera.setParameters(parameters);
The individual parameters won't be detailed here.
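That said, here is one quick, hedged illustration of the set* pattern; the focus-mode example is an assumption, not from the original text:

Camera.Parameters parameters = camera.getParameters();
// Only request autofocus if the device actually supports it.
List<String> focusModes = parameters.getSupportedFocusModes();
if (focusModes != null &&
        focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    camera.setParameters(parameters);
}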
Once again SurfaceView comes in handy, this time to display the live camera preview.
Here is some skeleton code:
public class CameraActivity extends Activity implements SurfaceHolder.Callback {

    private static final String TAG = "CameraActivity";

    private Camera camera;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        SurfaceView surface = (SurfaceView)findViewById(R.id.surfaceView);
        SurfaceHolder holder = surface.getHolder();
        holder.addCallback(this);
        holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        holder.setFixedSize(400, 300);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        try {
            camera.setPreviewDisplay(holder);
            camera.startPreview();
            // TODO Draw over the preview if required.
        } catch (IOException e) {
            Log.d(TAG, "IO Exception", e);
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        camera.stopPreview();
    }

    public void surfaceChanged(SurfaceHolder holder, int format,
                               int width, int height) {
    }

    @Override
    protected void onPause() {
        super.onPause();
        camera.release();
    }

    @Override
    protected void onResume() {
        super.onResume();
        camera = Camera.open();
    }
}
To process preview frames, call the camera's setPreviewCallback method, passing in a PreviewCallback implementation whose onPreviewFrame method you override.
camera.setPreviewCallback(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        int quality = 60;
        Size previewSize = camera.getParameters().getPreviewSize();
        YuvImage image = new YuvImage(data, ImageFormat.NV21,
            previewSize.width, previewSize.height, null);
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        image.compressToJpeg(
            new Rect(0, 0, previewSize.width, previewSize.height),
            quality, outputStream);
        // TODO Do something with the preview image.
    }
});
Android 4.0 added a face detection API, which won't be covered here.
With everything above configured, how do you actually take a picture?
Answer: call the camera object's takePicture method, passing in a ShutterCallback and two PictureCallback implementations (one for the RAW data and one for the JPEG-encoded image).
Example: skeleton code that takes a picture and saves the JPEG image to the SD card:
private void takePicture() {
    camera.takePicture(shutterCallback, rawCallback, jpegCallback);
}

ShutterCallback shutterCallback = new ShutterCallback() {
    public void onShutter() {
        // TODO Do something when the shutter closes.
    }
};

PictureCallback rawCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // TODO Do something with the image RAW data.
    }
};

PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // Save the image JPEG data to the SD card.
        FileOutputStream outStream = null;
        try {
            String path = Environment.getExternalStorageDirectory() + "/test.jpg";
            outStream = new FileOutputStream(path);
            outStream.write(data);
            outStream.close();
        } catch (FileNotFoundException e) {
            Log.e(TAG, "File Not Found", e);
        } catch (IOException e) {
            Log.e(TAG, "IO Exception", e);
        }
    }
};