Java Screenshot Optimization

I need to take screenshots in Java and then pass them to OpenCV for processing. In Java, a screenshot is captured with the following code:

    Robot robot = new Robot();
    BufferedImage screenCapture = robot.createScreenCapture(new Rectangle(0, 0, 1920, 1080));

However, on my machine a full-screen capture takes around 50 ms, and when I shrink the capture region the time drops roughly in proportion to the area. I wanted to find out why, so I dug into the JDK source: createScreenCapture does its work in Robot's private createCompatibleImage method, shown below.
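The proportional drop is what you would expect if the cost is dominated by per-pixel work: halving both dimensions cuts the pixel count, and hence the expected time, to a quarter. A quick sanity check of that arithmetic (the ~50 ms figure is my own measurement from above):

```java
public class CaptureScaling {
    // Pixel-count ratio between a full region and one with both dimensions halved.
    static double areaRatio(int width, int height) {
        return (double) (width * height) / ((width / 2) * (height / 2));
    }

    public static void main(String[] args) {
        double ratio = areaRatio(1920, 1080); // 1920x1080 vs. 960x540
        // If capture cost scales with pixel count, the quarter-area region
        // should take roughly 50 ms / 4 = ~12.5 ms instead of ~50 ms.
        System.out.println("area ratio = " + ratio); // prints area ratio = 4.0
    }
}
```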

private synchronized BufferedImage[]
            createCompatibleImage(Rectangle screenRect, boolean isHiDPI) {

        checkScreenCaptureAllowed();

        checkValidRect(screenRect);

        BufferedImage lowResolutionImage;
        BufferedImage highResolutionImage;
        DataBufferInt buffer;
        WritableRaster raster;
        BufferedImage[] imageArray;
        // The native screenshot call returns the pixels as an int[], so this
        // sets up which bits of each int hold the R, G and B values.
        if (screenCapCM == null) {
            /*
             * Fix for 4285201
             * Create a DirectColorModel equivalent to the default RGB ColorModel,
             * except with no Alpha component.
             */

            screenCapCM = new DirectColorModel(24,
                    /* red mask */ 0x00FF0000,
                    /* green mask */ 0x0000FF00,
                    /* blue mask */ 0x000000FF);
        }

        int[] bandmasks = new int[3];
        bandmasks[0] = screenCapCM.getRedMask();
        bandmasks[1] = screenCapCM.getGreenMask();
        bandmasks[2] = screenCapCM.getBlueMask();

        // My guess: this waits for pending drawing to finish so we don't
        // capture a duplicated/stale frame.
        Toolkit.getDefaultToolkit().sync();

        // Locate the GraphicsConfiguration of the screen containing the
        // capture rectangle's center.
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().
                getDefaultConfiguration();
        gc = SunGraphicsEnvironment.getGraphicsConfigurationAtPoint(
                gc, screenRect.getCenterX(), screenRect.getCenterY());

        AffineTransform tx = gc.getDefaultTransform();
        double uiScaleX = tx.getScaleX();
        double uiScaleY = tx.getScaleY();
        int[] pixels;
        // On my machine uiScaleX and uiScaleY are both 1.x, so the else branch is taken.
        if (uiScaleX == 1 && uiScaleY == 1) {

            pixels = peer.getRGBPixels(screenRect);
            buffer = new DataBufferInt(pixels, pixels.length);

            bandmasks[0] = screenCapCM.getRedMask();
            bandmasks[1] = screenCapCM.getGreenMask();
            bandmasks[2] = screenCapCM.getBlueMask();

            raster = Raster.createPackedRaster(buffer, screenRect.width,
                    screenRect.height, screenRect.width, bandmasks, null);
            SunWritableRaster.makeTrackable(buffer);

            highResolutionImage = new BufferedImage(screenCapCM, raster,
                    false, null);
            imageArray = new BufferedImage[1];
            imageArray[0] = highResolutionImage;

        } else {
            Rectangle scaledRect;
            if (peer.useAbsoluteCoordinates()) {
                scaledRect = toDeviceSpaceAbs(gc, screenRect.x,
                        screenRect.y, screenRect.width, screenRect.height);
            } else {
                scaledRect = toDeviceSpace(gc, screenRect.x,
                        screenRect.y, screenRect.width, screenRect.height);
            }
            // The native screenshot call happens here; this is one of the expensive steps.
            pixels = peer.getRGBPixels(scaledRect);
            // Build the objects that interpret the raw data in pixels.
            buffer = new DataBufferInt(pixels, pixels.length);
            raster = Raster.createPackedRaster(buffer, scaledRect.width,
                    scaledRect.height, scaledRect.width, bandmasks, null);
            SunWritableRaster.makeTrackable(buffer);

            highResolutionImage = new BufferedImage(screenCapCM, raster,
                    false, null);


            // Roughly: downscale the high-resolution image to produce the
            // low-resolution one. This drawImage call is also fairly expensive.
            lowResolutionImage = new BufferedImage(screenRect.width,
                    screenRect.height, highResolutionImage.getType());
            Graphics2D g = lowResolutionImage.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.setRenderingHint(RenderingHints.KEY_RENDERING,
                    RenderingHints.VALUE_RENDER_QUALITY);
            g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                    RenderingHints.VALUE_ANTIALIAS_ON);
            g.drawImage(highResolutionImage, 0, 0,
                    screenRect.width, screenRect.height,
                    0, 0, scaledRect.width, scaledRect.height, null);
            g.dispose();

            if(!isHiDPI) {
                imageArray = new BufferedImage[1];
                imageArray[0] = lowResolutionImage;
            } else {
                imageArray = new BufferedImage[2];
                imageArray[0] = lowResolutionImage;
                imageArray[1] = highResolutionImage;
            }

        }

        return imageArray;
    }
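One thing worth confirming from the code above is the pixel layout the DirectColorModel implies: each int packs one pixel as 0x00RRGGBB. A small stand-alone sketch (pure AWT, no display needed) verifying that the masks do what the comments claim:

```java
import java.awt.image.DirectColorModel;

public class PixelLayout {
    public static void main(String[] args) {
        // Same model as in the JDK code: 24-bit RGB, no alpha component.
        DirectColorModel cm = new DirectColorModel(24,
                /* red   */ 0x00FF0000,
                /* green */ 0x0000FF00,
                /* blue  */ 0x000000FF);

        int pixel = 0x00112233; // R = 0x11, G = 0x22, B = 0x33
        // getRed/getGreen/getBlue apply the masks and shifts for us.
        System.out.printf("r=%x g=%x b=%x%n",
                cm.getRed(pixel), cm.getGreen(pixel), cm.getBlue(pixel));
        // prints r=11 g=22 b=33
    }
}
```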

I benchmarked the pixels = peer.getRGBPixels(scaledRect) call on its own: it takes about 26 ms. That means roughly half of the ~50 ms total is spent in the other operations, and that is where optimization is possible.
My first idea was to check whether OpenCV can directly consume a format where a single int stores one pixel's RGB values, but it apparently cannot. So I tried parsing pixels by hand and extracting the R, G and B values myself.

        // Capture the screen
        pixels = peer.getRGBPixels(screenRect);
        // Unpack the pixels
        int length = screenRect.width * screenRect.height;
        byte[] imgBytes = new byte[length * 3];
        int byteIndex = 0;
        for (int i = 0; i < length; i++) {
            int pixel = pixels[i];
            // Each int is packed as 0x00RRGGBB, but OpenCV defaults to BGR
            // order, so write the blue byte first, then green, then red.
            imgBytes[byteIndex++] = (byte) pixel;         // blue
            imgBytes[byteIndex++] = (byte) (pixel >> 8);  // green
            imgBytes[byteIndex++] = (byte) (pixel >> 16); // red
        }

In my tests this parsing takes only 3–4 ms, and the resulting byte[] can be handed straight to a Mat. In total, capturing one frame and passing it to OpenCV now takes about 30 ms.
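Before wiring the byte[] into a Mat (with the standard OpenCV Java binding this would be a CV_8UC3 Mat filled via mat.put(0, 0, imgBytes); I have not re-verified that call here), it is worth sanity-checking the BGR byte order on a known pixel value. A stand-alone sketch of the same unpacking loop:

```java
public class BgrUnpack {
    // Unpack 0x00RRGGBB ints into the BGR byte order OpenCV expects.
    static byte[] toBgr(int[] pixels) {
        byte[] out = new byte[pixels.length * 3];
        int j = 0;
        for (int p : pixels) {
            out[j++] = (byte) p;         // blue (lowest byte)
            out[j++] = (byte) (p >> 8);  // green
            out[j++] = (byte) (p >> 16); // red
        }
        return out;
    }

    public static void main(String[] args) {
        // A pixel with R = 0x11, G = 0x22, B = 0x33 must come out as B, G, R.
        byte[] bgr = toBgr(new int[] { 0x00112233 });
        System.out.printf("%x %x %x%n", bgr[0], bgr[1], bgr[2]); // prints 33 22 11
    }
}
```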

Finally, I tried it with the screen content changing continuously: one capture plus the hand-off to OpenCV takes about 35 ms.