Getting Frames of Live Camera Footage as Bitmaps in Android Using the Camera2 API (Kotlin)

Photo by Angela Compagnone on Unsplash

Note: we will be using Kotlin in this story; for the Java version, click here.

There are always some things that we think are difficult to understand, but in reality we are just not looking at them from the right angle.

One such thing for Android developers is displaying live camera footage inside their applications, getting the frames of that footage one by one, and using them for particular purposes, like passing them to machine learning models.

So here I will show you how to add live camera footage to your application in a really simple way.

So first, create a new Android Studio application using Kotlin.

Permissions

After that, paste these lines inside the manifest file to declare the camera permission, since we want to access the camera.
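A minimal sketch of the manifest entry (the uses-feature line is optional and simply marks the camera as required hardware):

```xml
<!-- AndroidManifest.xml: declare the camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Optional: mark the camera as a required hardware feature -->
<uses-feature android:name="android.hardware.camera" android:required="true" />
```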

Then, inside the onCreate method of the activity class, ask for the permission dynamically.
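A minimal sketch using the standard ActivityCompat APIs (the request code 121 is an arbitrary value chosen here for illustration):

```kotlin
// Needs: android.Manifest, android.content.pm.PackageManager,
// androidx.core.app.ActivityCompat, androidx.core.content.ContextCompat

// Inside onCreate: request the camera permission at runtime on Android 6.0+
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
    != PackageManager.PERMISSION_GRANTED
) {
    // 121 is an arbitrary request code used for illustration
    ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA), 121)
}
```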

And paste this method below onCreate.
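A sketch of that callback; we will extend it later to show the camera fragment once permission is granted:

```kotlin
override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<out String>,
    grantResults: IntArray
) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode == 121 && grantResults.isNotEmpty() &&
        grantResults[0] == PackageManager.PERMISSION_GRANTED
    ) {
        // Permission granted; we will attach the camera fragment here later
    }
}
```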

After that, you need to copy three code files into your Android Studio project:

CameraConnectionFragment: Contains the code related to the Camera2 API

AutoFitTextureView: A class extending TextureView which is used to render the camera preview

ImageUtils: A utility class that will help us convert frames of the live camera footage to bitmaps

Layout

Now, in the layout folder, create a file named camera_fragment.xml and paste this code inside it.
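A minimal sketch of the layout, assuming AutoFitTextureView was placed in the package shown (adjust the fully qualified class name to wherever you put the file):

```xml
<?xml version="1.0" encoding="utf-8"?>
<com.example.imageclassificationlivefeed.AutoFitTextureView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/texture"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />
```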

Make sure you replace com.example.imageclassificationlivefeed with your application's package name.

There you can see that we are using AutoFitTextureView instead of a plain TextureView because it can adapt to different aspect ratios.

Then, inside the layout file of the activity where you want to display the live camera footage, add a FrameLayout, which will be replaced with CameraConnectionFragment later. So in our case, paste the code below inside activity_main.xml.
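A sketch of that entry; the container id is what we will target when swapping in the fragment:

```xml
<!-- Placeholder that will be replaced with CameraConnectionFragment -->
<FrameLayout
    android:id="@+id/container"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```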

Showing live camera footage

Now, inside our MainActivity.kt file, we need to add the code to replace the FrameLayout with CameraConnectionFragment.

So just below the onCreate method of the activity class, paste this setFragment method.
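A sketch of setFragment, assuming the copied CameraConnectionFragment exposes a newInstance factory that takes a preview-size callback, the frame listener, a layout id, and a desired preview size; adjust this to the actual signature of your copy. previewWidth and previewHeight are assumed to be Int fields on the activity that we will need later:

```kotlin
// Needs: android.util.Size
// The camera reports the preview size it actually chose via the callback.
protected fun setFragment() {
    val fragment = CameraConnectionFragment.newInstance(
        { size, rotation ->
            // Remember the preview size chosen by the camera
            previewHeight = size.height
            previewWidth = size.width
        },
        this,                      // this activity receives the frames (see below)
        R.layout.camera_fragment,  // the layout file created above
        Size(640, 480)             // desired preview size
    )
    supportFragmentManager.beginTransaction()
        .replace(R.id.container, fragment)  // swap the FrameLayout for the fragment
        .commit()
}
```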

Inside this method, you can see that we are replacing the FrameLayout (with id container) with an instance of the CameraConnectionFragment class.

Now, inside the onCreate method and inside the onRequestPermissionsResult method, just call the setFragment function.

So the onCreate method should look like this:
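In sketch form, showing the fragment immediately when permission is already granted:

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED
    ) {
        setFragment()  // permission already granted: show the camera right away
    } else {
        ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA), 121)
    }
}
```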

And onRequestPermissionsResult will look like this:
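A sketch of the updated callback:

```kotlin
override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<out String>,
    grantResults: IntArray
) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (grantResults.isNotEmpty() &&
        grantResults[0] == PackageManager.PERMISSION_GRANTED
    ) {
        setFragment()  // user granted the permission: attach the camera fragment
    }
}
```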

Now, inside the MainActivity.kt class, implement the interface ImageReader.OnImageAvailableListener. That interface contains a method, onImageAvailable, so add this method as well.
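In sketch form:

```kotlin
// Needs: android.media.ImageReader
class MainActivity : AppCompatActivity(), ImageReader.OnImageAvailableListener {

    // ...onCreate, setFragment, and onRequestPermissionsResult as above...

    override fun onImageAvailable(reader: ImageReader) {
        // Called once per frame of the live preview; filled in below
    }
}
```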

We are implementing that interface because we pass it to CameraConnectionFragment to get the frames of the live footage. So for each frame of the live camera footage, the onImageAvailable method will be called, and we can get that frame and use it for a variety of purposes.

So after doing that, your main activity will contain the permission handling, the setFragment method, and the onImageAvailable override.

And that’s it. Now when you run your application, the live camera footage will be displayed inside the main activity.

Converting frames into Bitmaps

Now we are displaying live camera footage inside our Android application and getting its frames. But we mostly deal with images in Bitmap format, so let’s convert those frames to bitmaps.

So replace the onImageAvailable method with this code.
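A sketch of that code, assuming the ImageUtils file copied earlier exposes a convertYUV420ToARGB8888 helper with the parameter order shown (adjust the call to match your copy). previewWidth and previewHeight are the fields set in setFragment:

```kotlin
// Needs: android.media.Image, android.media.ImageReader
// Fields on the activity (names follow the explanation below)
private var isProcessingFrame = false
private val yuvBytes = arrayOfNulls<ByteArray>(3)
private var rgbBytes: IntArray? = null
private var yRowStride = 0
private var imageConverter: Runnable? = null
private var postInferenceCallback: Runnable? = null

override fun onImageAvailable(reader: ImageReader) {
    if (previewWidth == 0 || previewHeight == 0) return
    if (rgbBytes == null) rgbBytes = IntArray(previewWidth * previewHeight)
    try {
        val image = reader.acquireLatestImage() ?: return
        if (isProcessingFrame) {
            // Previous frame is still being processed; drop this one
            image.close()
            return
        }
        isProcessingFrame = true
        val planes = image.planes
        fillBytes(planes, yuvBytes)
        yRowStride = planes[0].rowStride
        val uvRowStride = planes[1].rowStride
        val uvPixelStride = planes[1].pixelStride
        imageConverter = Runnable {
            // Assumes the copied ImageUtils class provides this converter
            ImageUtils.convertYUV420ToARGB8888(
                yuvBytes[0]!!, yuvBytes[1]!!, yuvBytes[2]!!,
                previewWidth, previewHeight,
                yRowStride, uvRowStride, uvPixelStride, rgbBytes!!
            )
        }
        postInferenceCallback = Runnable {
            // Release the frame and allow the next one in
            image.close()
            isProcessingFrame = false
        }
        processImage()
    } catch (e: Exception) {
        return
    }
}

// Copies each image plane's buffer into the matching byte array
private fun fillBytes(planes: Array<Image.Plane>, yuvBytes: Array<ByteArray?>) {
    for (i in planes.indices) {
        val buffer = planes[i].buffer
        if (yuvBytes[i] == null) yuvBytes[i] = ByteArray(buffer.capacity())
        buffer.get(yuvBytes[i]!!)
    }
}
```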

So now, when the onImageAvailable method is called for a particular frame, we check whether the isProcessingFrame variable is true. If it is, it means that processing of the previous frame is not completed yet, so we return from there. Otherwise, we process that particular frame.

To convert that frame into a bitmap, we first get the planes of that frame, then get the bytes, and finally convert those bytes into Bitmap format. You can see the code related to that in the processImage method.
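A sketch of processImage, using the fields prepared above (rgbFrameBitmap is an assumed Bitmap? field on the activity):

```kotlin
// Needs: android.graphics.Bitmap
private var rgbFrameBitmap: Bitmap? = null

private fun processImage() {
    imageConverter!!.run()  // run the YUV -> ARGB conversion prepared earlier
    rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Bitmap.Config.ARGB_8888)
    rgbFrameBitmap!!.setPixels(rgbBytes!!, 0, previewWidth, 0, 0, previewWidth, previewHeight)

    // rgbFrameBitmap now holds the current frame; use it here
    // (e.g. pass it to a machine learning model), then release the frame:
    postInferenceCallback!!.run()
}
```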

So now we have the frame from the live camera footage in Bitmap format, stored in a variable named rgbFrameBitmap. You can perform any operation on that bitmap, such as passing it to a machine learning model. But once you finish processing that particular frame, you need to call the postInferenceCallback.run() method so that the isProcessingFrame variable is set back to false and the next frame is passed for processing.

So the code above will not just give you frames of the live camera footage; it will give you the frames one by one. Once the processing of one frame is completed, the next frame is passed for processing.

YOU CAN GET THE COMPLETE CODE HERE

Train your own image recognition models and build real-time Android applications with our “Image Recognition in Android One Hour Bootcamp”.

To learn Android machine learning, check out our “Android Machine Learning with TensorFlow Lite in Java/Kotlin” course.

