WebRTC Real-Time Communication Tutorial Series, Part 4: Stream Video from Your Webcam

[Please credit the source when reposting: http://blog.csdn.net/leytton/article/details/76704342]

PS: If this article helps you, please give it a like to let me know!

The "WebRTC Real-Time Communication Tutorial Series" is translated from "Real time communication with WebRTC".

Sample code download: http://download.csdn.net/detail/leytton/9923708

WebRTC Real-Time Communication Tutorial Series, Part 1: Introduction

WebRTC Real-Time Communication Tutorial Series, Part 2: Overview

WebRTC Real-Time Communication Tutorial Series, Part 3: Get the Sample Code

WebRTC Real-Time Communication Tutorial Series, Part 4: Stream Video from Your Webcam

WebRTC Real-Time Communication Tutorial Series, Part 5: Stream Video with RTCPeerConnection

WebRTC Real-Time Communication Tutorial Series, Part 6: Use RTCDataChannel to Exchange Data

WebRTC Real-Time Communication Tutorial Series, Part 7: Set Up a Signaling Service with Socket.IO to Exchange Messages

WebRTC Real-Time Communication Tutorial Series, Part 8: Combine Peer Connection and Signaling

WebRTC Real-Time Communication Tutorial Series, Part 9: Transfer Images over a Data Channel

WebRTC Real-Time Communication Tutorial Series, Part 10: Congratulations on Completing This Series

Part 1: Translation

1. What you'll learn

In this step you'll learn how to:

  • Get a video stream from your webcam.
  • Play back the video stream.
  • Use CSS and SVG to manipulate the video.

A complete version of this step's code is in the step-01 folder.

2. A dash of HTML...

Add a video element and a script element to index.html in your work directory:

<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <video autoplay></video>
  <script src="js/main.js"></script>
</body>
</html>

3. ...and a pinch of JavaScript

Add the following to main.js in your js folder:

'use strict';

navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

var constraints = {
  audio: false,
  video: true
};

var video = document.querySelector('video');

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}

function errorCallback(error) {
  console.log('navigator.getUserMedia error: ', error);
}

navigator.getUserMedia(constraints, successCallback, errorCallback);

All the JavaScript examples here use 'use strict'; to avoid common coding gotchas.

Find out more about what that means in ECMAScript 5 Strict Mode, JSON, and More.

4. Try it out

Open index.html in your browser and you should see something like this (featuring the view from your webcam, of course!):


A better API for getUserMedia

If you think this code looks a little old fashioned, you're right.

We're using the callback version of getUserMedia() here for compatibility with current browsers.

Check out the demos at github.com/webrtc/samples to see the Promise-based version, which uses the MediaDevices API and has better error handling. We'll be using that later.
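For reference, here is a minimal sketch of what the Promise-based call looks like, assuming a browser that supports navigator.mediaDevices (the variable names are illustrative; this is not the step-01 code):

// Minimal Promise-based sketch (assumes navigator.mediaDevices is available).
var constraints = {audio: false, video: true};
var video = document.querySelector('video');

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    window.stream = stream;   // keep it reachable from the console
    video.srcObject = stream; // modern replacement for createObjectURL()
  })
  .catch(function(error) {
    console.log('navigator.mediaDevices.getUserMedia error: ', error);
  });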

5. How it works

getUserMedia() is called like this:

navigator.getUserMedia(constraints, successCallback, errorCallback);

This API is still relatively new, so browsers still use prefixed names for getUserMedia; hence the shim code at the top of main.js.

The constraints argument lets you specify what media to get. In this example, video only, with no audio:

var constraints = { audio: false, video: true };
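As a side note, constraints can go beyond simple booleans. A hedged sketch of a richer constraints object using the modern constraint syntax (the hdConstraints name is illustrative; the width/height fields target the Promise-based mediaDevices API and may be ignored or rejected by older prefixed implementations):

// Hypothetical example: prefer roughly 1280x720 video, no audio.
var hdConstraints = {
  audio: false,
  video: {
    width: {ideal: 1280},
    height: {ideal: 720}
  }
};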

If getUserMedia() succeeds, the video stream from the webcam is set as the source of the video element:

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}

6. Bonus points

  • The stream object obtained from getUserMedia() is in global scope, so you can inspect it from the browser console: open the console, type stream and press Return.
  • What does stream.getVideoTracks() return?
  • Try calling stream.getVideoTracks()[0].stop().
  • Look at the constraints object: what happens when you change it to {audio: true, video: true}?
  • What size is the video element? How can you get the video's natural size from JavaScript, as opposed to its display size? Use the Chrome Dev Tools to check. (See the console sketch after this list.)
  • Try adding CSS filters to the video element. For example:
video { -webkit-filter: blur(4px) invert(1) opacity(0.5); }
  • Try adding SVG filters. For example:
video { filter: hue-rotate(180deg) saturate(200%); -moz-filter: hue-rotate(180deg) saturate(200%); -webkit-filter: hue-rotate(180deg) saturate(200%); }
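A couple of the questions above can be explored straight from the browser console. A minimal sketch, assuming the step-01 page is open and the stream global from main.js is available (the track variable is only illustrative):

// Run in the browser console while the stream is playing.
var track = stream.getVideoTracks()[0]; // getVideoTracks() returns an array of MediaStreamTrack objects
console.log(track.label);               // typically the camera's name
track.stop();                           // stops the camera, ending the video track

// Natural size of the stream versus the size the element is displayed at:
var video = document.querySelector('video');
console.log(video.videoWidth, video.videoHeight);   // natural (encoded) size
console.log(video.clientWidth, video.clientHeight); // layout size on the page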

7. What you learned

In this step you learned how to:

  • Get video from your webcam.
  • Set media constraints.
  • Style the video element.

A complete version of this step's code is in the step-01 folder.

8. Tips

  • Don't forget the autoplay attribute on the video element. Without it, you'll only see a single frame!
  • There are lots more options for getUserMedia() constraints. Take a look at the demo at webrtc.github.io/samples/src/content/peerconnection/constraints. As you'll see, there are lots of interesting WebRTC samples on that site.

9. Best practice

  • Make sure your video element doesn't overflow its container. We've added width and max-width to set a preferred size and a maximum size for the video; the browser will calculate the height automatically:
video { max-width: 100%; width: 320px; }

10. Next up

You've got video, but how do you stream it? Find out in the next step!



Part 2: Original Text

Excerpted from https://codelabs.developers.google.com/codelabs/webrtc-web/#3


4. Stream video from your webcam

What you'll learn

In this step you'll find out how to:

  • Get a video stream from your webcam.
  • Manipulate stream playback.
  • Use CSS and SVG to manipulate video.

A complete version of this step is in the step-01 folder.

A dash of HTML...

Add a video element and a script element to index.html in your work directory:

<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <video autoplay></video>
  <script src="js/main.js"></script>
</body>
</html>

...and a pinch of JavaScript

Add the following to main.js in your js folder:

'use strict';

navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

var constraints = {
  audio: false,
  video: true
};

var video = document.querySelector('video');

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}

function errorCallback(error) {
  console.log('navigator.getUserMedia error: ', error);
}

navigator.getUserMedia(constraints, successCallback, errorCallback);

All the JavaScript examples here use 'use strict'; to avoid common coding gotchas.

Find out more about what that means in ECMAScript 5 Strict Mode, JSON, and More.

Try it out

Open index.html in your browser and you should see something like this (featuring the view from your webcam, of course!):


A better API for gUM

If you think the code looks a little old fashioned, you're right.

We're using the callback version of getUserMedia() for compatibility with current browsers.

Check out the demo at github.com/webrtc/samples to see the Promise-based version, using the MediaDevices API and better error handling. Much nicer! We'll be using that later.

How it works

getUserMedia() is called like this:

navigator.getUserMedia(constraints, successCallback, errorCallback);

This technology is still relatively new, so browsers are still using prefixed names for getUserMedia. Hence the shim code at the top of main.js!

The constraints argument allows you to specify what media to get — in this example, video and not audio:

var constraints = { audio: false, video: true };

If getUserMedia() is successful, the video stream from the webcam is set as the source of the video element:

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}

Bonus points

  • The stream object passed to getUserMedia() is in global scope, so you can inspect it from the browser console: open the console, type stream and press Return. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.)
  • What does stream.getVideoTracks() return?
  • Try calling stream.getVideoTracks()[0].stop().
  • Look at the constraints object: what happens when you change it to {audio: true, video: true}?
  • What size is the video element? How can you get the video's natural size from JavaScript, as opposed to display size? Use the Chrome Dev Tools to check.
  • Try adding CSS filters to the video element. For example:
video { -webkit-filter: blur(4px) invert(1) opacity(0.5); }
  • Try adding SVG filters. For example:
video { filter: hue-rotate(180deg) saturate(200%); -moz-filter: hue-rotate(180deg) saturate(200%); -webkit-filter: hue-rotate(180deg) saturate(200%); }

What you learned

In this step you learned how to:

  • Get video from your webcam.
  • Set media constraints.
  • Mess with the video element.

A complete version of this step is in the step-01 folder.

Tips

  • Don't forget the autoplay attribute on the video element. Without that, you'll only see a single frame!
  • There are lots more options for getUserMedia() constraints. Take a look at the demo at webrtc.github.io/samples/src/content/peerconnection/constraints. As you'll see, there are lots of interesting WebRTC samples on that site.

Best practice

  • Make sure your video element doesn't overflow its container. We've added width and max-width to set a preferred size and a maximum size for the video. The browser will calculate the height automatically:
video { max-width: 100%; width: 320px; }

Next up

You've got video, but how do you stream it? Find out in the next step!
