Getting Frames of Live camera footage as Bitmaps in Android using Camera2 API

Hamza Asif
7 min read · Mar 5, 2021

Note: we will be using Java in this story; for Kotlin, click here. You can also watch the video lecture of this story here on YouTube.


There are always things we think are difficult to understand when, in reality, we are just not looking at them from the right angle.

One such thing for Android developers is displaying live camera footage inside an application and getting the frames of that footage one by one, so they can be used for particular purposes like passing them to machine learning models.

So here I will show you how to add live camera footage to your application in a really simple way.

First, create a new Android Studio application using Java.

Permissions

After that, paste these lines inside the manifest file to declare the camera permission and camera features, since we want to access the camera:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />

Then, inside the onCreate method of the Activity class, ask for the camera permission dynamically:

//TODO ask for camera permissions
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    if (checkSelfPermission(Manifest.permission.CAMERA) == PackageManager.PERMISSION_DENIED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA}, 121);
    } else {
        //TODO show live camera footage
    }
} else {
    //TODO show live camera footage
}

And paste this method below onCreate

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    //TODO show live camera footage
    if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        //TODO show live camera footage
    } else {

    }
}

After that, you need to copy three code files into your Android Studio project:

CameraConnectionFragment: Contains the code related to the Camera2 API.

AutoFitTextureView: A class extending TextureView which is used to render the camera preview (a minimal sketch is shown after this list).

ImageUtils: A utility class that will help us convert frames of the live camera footage to bitmaps.
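For reference, here is a minimal sketch of what AutoFitTextureView does, adapted from Google's Camera2 sample code; the file you copy may differ in details such as the number of constructors:

import android.content.Context;
import android.util.AttributeSet;
import android.view.TextureView;

// A TextureView that can be resized to match a requested aspect ratio,
// so the camera preview is not stretched.
public class AutoFitTextureView extends TextureView {
    private int ratioWidth = 0;
    private int ratioHeight = 0;

    public AutoFitTextureView(final Context context, final AttributeSet attrs) {
        super(context, attrs);
    }

    // Called once the camera preview size is known.
    public void setAspectRatio(final int width, final int height) {
        ratioWidth = width;
        ratioHeight = height;
        requestLayout();
    }

    @Override
    protected void onMeasure(final int widthMeasureSpec, final int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        final int width = MeasureSpec.getSize(widthMeasureSpec);
        final int height = MeasureSpec.getSize(heightMeasureSpec);
        if (ratioWidth == 0 || ratioHeight == 0) {
            // No aspect ratio requested yet; use the measured size as-is.
            setMeasuredDimension(width, height);
        } else if (width < height * ratioWidth / ratioHeight) {
            // Width is the limiting dimension; scale height to keep the ratio.
            setMeasuredDimension(width, width * ratioHeight / ratioWidth);
        } else {
            // Height is the limiting dimension; scale width to keep the ratio.
            setMeasuredDimension(height * ratioWidth / ratioHeight, height);
        }
    }
}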

Layout

Now, in the layout folder, create a file named camera_fragment.xml and paste this code inside it:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.example.imageclassificationlivefeed.AutoFitTextureView
        android:id="@+id/texture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true" />

</RelativeLayout>

Make sure you replace com.example.imageclassificationlivefeed with your application's package name.

As you can see, we are using AutoFitTextureView instead of a plain TextureView because it can adapt to different aspect ratios.

Then, inside the layout file of the activity where you want to display the live camera footage, add a FrameLayout, which will later be replaced with the CameraConnectionFragment. So in our case, paste the code below inside activity_main.xml:

<FrameLayout
    android:id="@+id/container"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="@android:color/black"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintHorizontal_bias="0.0"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent" />

Showing live camera footage

Now, inside our MainActivity.java file, we need to add the code that replaces the FrameLayout with the CameraConnectionFragment.

So just below the onCreate method of the activity class, paste this setFragment method:

//TODO fragment which shows live footage from camera
int previewHeight = 0, previewWidth = 0;
int sensorOrientation;

@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
protected void setFragment() {
    final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    String cameraId = null;
    try {
        cameraId = manager.getCameraIdList()[0];
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    CameraConnectionFragment fragment;
    CameraConnectionFragment camera2Fragment =
            CameraConnectionFragment.newInstance(
                    new CameraConnectionFragment.ConnectionCallback() {
                        @Override
                        public void onPreviewSizeChosen(final Size size, final int rotation) {
                            previewHeight = size.getHeight();
                            previewWidth = size.getWidth();
                            Log.d("tryOrientation", "rotation: " + rotation + " orientation: " + getScreenOrientation() + " " + previewWidth + " " + previewHeight);
                            sensorOrientation = rotation - getScreenOrientation();
                        }
                    },
                    this,
                    R.layout.camera_fragment,
                    new Size(640, 480));

    camera2Fragment.setCamera(cameraId);
    fragment = camera2Fragment;
    getFragmentManager().beginTransaction().replace(R.id.container, fragment).commit();
}

protected int getScreenOrientation() {
    switch (getWindowManager().getDefaultDisplay().getRotation()) {
        case Surface.ROTATION_270:
            return 270;
        case Surface.ROTATION_180:
            return 180;
        case Surface.ROTATION_90:
            return 90;
        default:
            return 0;
    }
}

Inside this method, you can see that we are replacing the FrameLayout (with id container) with an instance of the CameraConnectionFragment class.

Now, inside the onCreate method and the onRequestPermissionsResult method, just call the setFragment function.

So the onCreate code should look like this:

//TODO ask for camera permissions
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    if (checkSelfPermission(android.Manifest.permission.CAMERA) == PackageManager.PERMISSION_DENIED) {
        ActivityCompat.requestPermissions(this,
                new String[]{android.Manifest.permission.CAMERA}, 121);
    } else {
        //TODO show live camera footage
        setFragment();
    }
} else {
    //TODO show live camera footage
    setFragment();
}

And onRequestPermissionsResult will look like this:

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    //TODO show live camera footage
    if (grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        setFragment();
    }
}

Now let the MainActivity class implement the interface ImageReader.OnImageAvailableListener. That interface contains a method named onImageAvailable, so add this method as well:

@Override
public void onImageAvailable(ImageReader reader) {
    final Image image = reader.acquireLatestImage();
    image.close();
}

We are implementing that interface because we pass it into CameraConnectionFragment, which uses it to deliver the frames of the live footage. So for each frame of the live camera footage, the onImageAvailable method will be called, and we can get that frame and use it for a variety of purposes.
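You don't need to modify CameraConnectionFragment for this to work, but for context, the wiring inside it looks roughly like the sketch below (names such as previewReader, previewSize, imageListener, and backgroundHandler follow the TensorFlow example code this fragment is usually taken from, so your copy may differ):

// Inside CameraConnectionFragment (sketch): an ImageReader is created for the
// chosen preview size, and the activity (imageListener) is registered so that
// onImageAvailable fires on the background thread for every new frame.
previewReader = ImageReader.newInstance(
        previewSize.getWidth(), previewSize.getHeight(),
        ImageFormat.YUV_420_888, 2); // YUV frames, at most 2 buffered
previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);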

After doing all that, your main activity will look like this:

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;

import android.Manifest;
import android.app.Fragment;
import android.content.Context;
import android.content.pm.PackageManager;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraManager;
import android.media.Image;
import android.media.ImageReader;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.util.Size;
import android.view.Surface;

public class MainActivity extends AppCompatActivity implements ImageReader.OnImageAvailableListener {
    Handler handler;
    private int sensorOrientation;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        handler = new Handler();

        //TODO ask for camera permissions
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            if (checkSelfPermission(android.Manifest.permission.CAMERA) == PackageManager.PERMISSION_DENIED) {
                ActivityCompat.requestPermissions(this,
                        new String[]{android.Manifest.permission.CAMERA}, 121);
            } else {
                //TODO show live camera footage
                setFragment();
            }
        } else {
            //TODO show live camera footage
            setFragment();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        //TODO show live camera footage
        if (grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            setFragment();
        }
    }

    //TODO fragment which shows live footage from camera
    int previewHeight = 0, previewWidth = 0;

    protected void setFragment() {
        final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
        String cameraId = null;
        try {
            cameraId = manager.getCameraIdList()[0];
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }

        Fragment fragment;
        CameraConnectionFragment camera2Fragment =
                CameraConnectionFragment.newInstance(
                        new CameraConnectionFragment.ConnectionCallback() {
                            @Override
                            public void onPreviewSizeChosen(final Size size, final int rotation) {
                                previewHeight = size.getHeight();
                                previewWidth = size.getWidth();
                                sensorOrientation = rotation - getScreenOrientation();
                            }
                        },
                        this,
                        R.layout.camera_fragment,
                        new Size(640, 480));

        camera2Fragment.setCamera(cameraId);
        fragment = camera2Fragment;

        getFragmentManager().beginTransaction().replace(R.id.container, fragment).commit();
    }

    protected int getScreenOrientation() {
        switch (getWindowManager().getDefaultDisplay().getRotation()) {
            case Surface.ROTATION_270:
                return 270;
            case Surface.ROTATION_180:
                return 180;
            case Surface.ROTATION_90:
                return 90;
            default:
                return 0;
        }
    }

    //TODO getting frames of live camera footage and passing them to model
    @Override
    public void onImageAvailable(ImageReader reader) {
        final Image image = reader.acquireLatestImage();
        image.close();
    }
}

And that's it. Now when you run your application, live camera footage will be displayed inside the main activity.

Converting frames into Bitmaps

Now we have displayed the live camera footage inside our Android application, and we are getting its frames. But mostly we deal with images in Bitmap format, so let's convert those frames to bitmaps.

So replace the onImageAvailable method with this code (note the extra imports it needs: android.graphics.Bitmap, android.util.Log, and java.nio.ByteBuffer):

//TODO getting frames of live camera footage and passing them to model
private boolean isProcessingFrame = false;
private byte[][] yuvBytes = new byte[3][];
private int[] rgbBytes = null;
private int yRowStride;
private Runnable postInferenceCallback;
private Runnable imageConverter;
private Bitmap rgbFrameBitmap;

@Override
public void onImageAvailable(ImageReader reader) {
    // We need to wait until we have some size from onPreviewSizeChosen
    if (previewWidth == 0 || previewHeight == 0) {
        return;
    }
    if (rgbBytes == null) {
        rgbBytes = new int[previewWidth * previewHeight];
    }
    try {
        final Image image = reader.acquireLatestImage();
        if (image == null) {
            return;
        }
        if (isProcessingFrame) {
            image.close();
            return;
        }
        isProcessingFrame = true;
        final Image.Plane[] planes = image.getPlanes();
        fillBytes(planes, yuvBytes);
        yRowStride = planes[0].getRowStride();
        final int uvRowStride = planes[1].getRowStride();
        final int uvPixelStride = planes[1].getPixelStride();

        imageConverter =
                new Runnable() {
                    @Override
                    public void run() {
                        ImageUtils.convertYUV420ToARGB8888(
                                yuvBytes[0],
                                yuvBytes[1],
                                yuvBytes[2],
                                previewWidth,
                                previewHeight,
                                yRowStride,
                                uvRowStride,
                                uvPixelStride,
                                rgbBytes);
                    }
                };

        postInferenceCallback =
                new Runnable() {
                    @Override
                    public void run() {
                        image.close();
                        isProcessingFrame = false;
                    }
                };

        processImage();
    } catch (final Exception e) {
        Log.d("tryError", e.getMessage());
    }
}

private void processImage() {
    imageConverter.run();
    rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Bitmap.Config.ARGB_8888);
    rgbFrameBitmap.setPixels(rgbBytes, 0, previewWidth, 0, 0, previewWidth, previewHeight);
    //Do your work here
    postInferenceCallback.run();
}

protected void fillBytes(final Image.Plane[] planes, final byte[][] yuvBytes) {
    // Because of the variable row stride it's not possible to know in
    // advance the actual necessary dimensions of the yuv planes.
    for (int i = 0; i < planes.length; ++i) {
        final ByteBuffer buffer = planes[i].getBuffer();
        if (yuvBytes[i] == null) {
            yuvBytes[i] = new byte[buffer.capacity()];
        }
        buffer.get(yuvBytes[i]);
    }
}

So now, when the onImageAvailable method is called for a particular frame, we check whether the isProcessingFrame variable is true. If it is true, it means the processing of the previous frame is not completed yet, so we return from there. Otherwise, we process that frame.

To convert that frame into a bitmap, we first get the planes of the frame and read their bytes, and then we convert those bytes into Bitmap format. You can see the code related to that in the processImage method.
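If you are curious what happens inside ImageUtils.convertYUV420ToARGB8888, a simplified sketch is shown below; the real ImageUtils (from the TensorFlow example code) uses fixed-point arithmetic for speed, but the idea is the same, assuming standard BT.601 YUV-to-RGB coefficients:

// Simplified sketch of a YUV_420_888 -> ARGB_8888 conversion.
static void convertYUV420ToARGB8888Sketch(
        byte[] yData, byte[] uData, byte[] vData,
        int width, int height,
        int yRowStride, int uvRowStride, int uvPixelStride,
        int[] out) {
    int outIndex = 0;
    for (int j = 0; j < height; j++) {
        int pY = yRowStride * j;
        int pUV = uvRowStride * (j >> 1); // U and V planes are half resolution
        for (int i = 0; i < width; i++) {
            int uvOffset = pUV + (i >> 1) * uvPixelStride;
            int y = 0xff & yData[pY + i];
            int u = (0xff & uData[uvOffset]) - 128;
            int v = (0xff & vData[uvOffset]) - 128;
            // BT.601 full-range conversion
            int r = clamp(Math.round(y + 1.402f * v));
            int g = clamp(Math.round(y - 0.344f * u - 0.714f * v));
            int b = clamp(Math.round(y + 1.772f * u));
            out[outIndex++] = 0xff000000 | (r << 16) | (g << 8) | b;
        }
    }
}

private static int clamp(int x) {
    return x < 0 ? 0 : Math.min(x, 255);
}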

So now we have the frame from the live camera footage in Bitmap format, stored in a variable named rgbFrameBitmap. You can perform any operation on that bitmap, like passing it to a machine learning model. But once you finish processing that particular frame, you need to call postInferenceCallback.run() so that the isProcessingFrame variable is set back to false and the next frame can be passed for processing.

So the above code will not just give you frames of the live camera footage; it will give them to you one by one. Once the processing of one frame is completed, the next frame will be passed for processing.
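For example, if your per-frame work is expensive, you could hand it off using the handler we created in onCreate. This is just a sketch: it assumes you back the handler with a HandlerThread (instead of the plain new Handler() above) so the work stays off the UI thread, and classifyFrame is a placeholder for whatever you do with the bitmap:

// In onCreate, a background handler instead of new Handler():
//   HandlerThread handlerThread = new HandlerThread("frame-processing");
//   handlerThread.start();
//   handler = new Handler(handlerThread.getLooper());

private void processImage() {
    imageConverter.run();
    rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Bitmap.Config.ARGB_8888);
    rgbFrameBitmap.setPixels(rgbBytes, 0, previewWidth, 0, 0, previewWidth, previewHeight);
    handler.post(new Runnable() {
        @Override
        public void run() {
            classifyFrame(rgbFrameBitmap); // placeholder for your own processing
            // Release the frame only after the work is done, so the next
            // frame is delivered for processing.
            postInferenceCallback.run();
        }
    });
}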

Mobile Machine Learning

Learn the use of machine learning and computer vision in Android, Flutter & React Native with our Mobile Machine Learning courses. You can avail a discount on the following Mobile Machine Learning courses:

Android Machine Learning Courses

Face Recognition in Android — Build Attendance Systems

Train Object Detection Models & build Android Applications

ChatGPT & Android — Build Chatbots & Smart Apps for Android

Android Machine Learning with TensorFlow lite in Java/Kotlin

Android & Regression: Train Prediction ML models for Android

Flutter Machine Learning Courses

Machine Learning for Flutter The Complete 2023 Guide

Face Recognition and Detection in Flutter — The 2024 Guide

Flutter and Linear Regression: Build Prediction Apps Flutter

ChatGPT & Flutter: Build Chatbots & Assistants in Flutter

Train Object Detection and classification models for Flutter

React Native Courses

ChatGPT & React Native — Build Chatbots for Android & IOS

Connect With Me

My Courses: https://www.udemy.com/user/e1c14fb5-1c9b-45ef-a479-bbc543e33254/

My Facebook: https://www.facebook.com/MobileMachineLearning

Youtube Channel: https://www.youtube.com/channel/UCuM6FHbMdYXQCR8syEtnM9Q

