Did you know? According to Forbes Advisor, 65% of consumers trust businesses that use AI (Artificial Intelligence). No wonder many companies are starting to implement AI in their products.
It is not only in products for daily life, but also in tools we usually use for work: Android Studio with its Studio Bot, Google Workspace with its Duet AI, and Photoshop with its Generative Fill. It cannot be denied that AI helps us complete work much more quickly. Agree or strongly agree? So, the question is: are we just going to be users and tasters of these new tools?
The author's hope, of course, is that we will not merely be spectators, but also developers who can make use of AI, at least starting from simple machine learning tasks such as image classification, text recognition, or object detection.

Interestingly, even if you don't have a deep understanding of machine learning, you can still apply this technology.
How to implement AI in an Android app?
Currently, there are many tools you can use to implement machine learning in Android applications, ranging from the simple to the complex and flexible. Curious about the options? Come on, let's discuss them!
1. ML Kit
ML Kit is a machine learning framework developed by Google for Android and iOS devices. It provides a collection of APIs for common machine learning features, such as image recognition and natural language processing (NLP).

This is the easiest way to add machine learning features to your Android application, because you don't need to know how to create your own model; everything is already available in the framework. With ML Kit, implementing ML can be easier than you ever imagined.
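As a rough illustration, labeling the contents of an image with ML Kit's built-in model takes only a few lines of Kotlin. This is a hedged sketch: it assumes the `com.google.mlkit:image-labeling` dependency has been added to your Gradle file and that the function runs inside an Android app that already has a `Bitmap` in hand.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

fun labelImage(bitmap: Bitmap) {
    // Wrap the bitmap for ML Kit; 0 means no rotation is applied
    val image = InputImage.fromBitmap(bitmap, 0)

    // Labeler backed by the built-in model — no custom training needed
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

    // Inference is asynchronous; results arrive in the success listener
    labeler.process(image)
        .addOnSuccessListener { labels ->
            for (label in labels) {
                println("${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

Notice that there is no model file anywhere in the code: the framework ships the model for you, which is exactly what makes ML Kit the lowest-effort option.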
2. TensorFlow Lite
In ML Kit, the models are provided built in. A model here means a machine learning algorithm that has already gone through a training process on certain training data, so it is ready to make predictions on new data.

Then, what if you want to use your own model? One answer is TensorFlow Lite. As the name suggests, TensorFlow Lite is a lightweight and efficient version of the TensorFlow framework, which ML developers often use to develop and deploy models.
It is designed to let us run models on devices with limited resources, such as mobile phones and embedded systems. Additionally, it supports various types of machine learning tasks, such as image recognition, audio recognition, and natural language processing.

This library can run on mobile devices such as Android and iOS, or on IoT devices such as Raspberry Pi and Arduino. Some examples of applications that use TensorFlow Lite are Gmail, Google Assistant, Google Nest, and Shazam. The names are familiar, right?
So, if you already have a TensorFlow model, you can convert it into a TensorFlow Lite model with the .tflite extension. The result is smaller, more efficient, and supported by the TensorFlow Lite framework.

If you don't have your own model, you can use TensorFlow Lite Model Maker to carry out transfer learning. Transfer learning is an approach in which a model that has previously been trained for a certain task is used as a starting point for a new model with a similar task. You can also browse TensorFlow Hub for pretrained models to use as references.
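Once you have a .tflite file, running it on-device revolves around the `Interpreter` class. The sketch below is hedged: it assumes the `org.tensorflow:tensorflow-lite` dependency, a model bundled at `assets/model.tflite`, and the input/output shapes of a hypothetical image classifier — your model's shapes will differ.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a model file bundled in the app's assets folder
fun loadModel(context: Context, assetPath: String): MappedByteBuffer {
    val fd = context.assets.openFd(assetPath)
    FileInputStream(fd.fileDescriptor).channel.use { channel ->
        return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
    }
}

fun runInference(context: Context) {
    val interpreter = Interpreter(loadModel(context, "model.tflite"))

    // Hypothetical shapes: a 224x224 RGB image in, 1001 class scores out
    val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
    val output = Array(1) { FloatArray(1001) }

    interpreter.run(input, output)  // fills `output` with the model's predictions
    interpreter.close()
}
```

The design point here is that TensorFlow Lite gives you full control: you choose the model, the preprocessing, and the interpretation of the raw output scores, in exchange for a bit more code than ML Kit.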
3. MediaPipe
MediaPipe is the newest solution for implementing machine learning, introduced at Google I/O 2023. This framework can run on mobile platforms (Android & iOS), IoT, the web, and desktop. It has also been used in many products, such as Google Lens, Google Meet, YouTube, and Google Photos. There's no doubt about it, right?
MediaPipe is built on top of TensorFlow Lite, so it has the same model-deployment capabilities, if not more. MediaPipe provides a way to combine multiple models in a pipeline. Apart from that, there is also a MediaPipe Model Maker feature for transfer learning, just like in TensorFlow Lite.

A different and interesting thing MediaPipe offers is MediaPipe Studio. On this website, you can directly try out each type of machine learning task with various settings. Not only that, you can also bring your own model and test it there. Interesting, right?
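For on-device use, MediaPipe exposes high-level Tasks APIs. Below is a hedged sketch of image classification with the vision tasks library; the `com.google.mediapipe:tasks-vision` dependency, the `classifier.tflite` asset name, and the exact result-handling calls are assumptions based on the Tasks API pattern, so check them against the current MediaPipe documentation.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.google.mediapipe.framework.image.BitmapImageBuilder
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.imageclassifier.ImageClassifier

fun classifyWithMediaPipe(context: Context, bitmap: Bitmap) {
    // Point the task at a bundled .tflite model and cap the result count
    val options = ImageClassifier.ImageClassifierOptions.builder()
        .setBaseOptions(BaseOptions.builder().setModelAssetPath("classifier.tflite").build())
        .setMaxResults(3)
        .build()

    val classifier = ImageClassifier.createFromOptions(context, options)
    val result = classifier.classify(BitmapImageBuilder(bitmap).build())

    // Walk the scored categories (at most 3, per setMaxResults above)
    result.classificationResult().classifications().forEach { classification ->
        classification.categories().forEach { category ->
            println("${category.categoryName()}: ${category.score()}")
        }
    }
}
```

Compared with raw TensorFlow Lite, the task wrapper handles the tensor shapes and label mapping for you, which is the convenience MediaPipe is selling.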
4. Firebase ML
Unlike the previous frameworks, machine learning processing in Firebase ML is carried out in the cloud, not on the device.

The advantage of this method is that it saves space, so the application size can be smaller when downloaded. Apart from that, if you want to update the model, you can push the change directly over the air (OTA).

The weakness of this method is that responses are slower. Apart from that, it is heavily dependent on the internet, so the application cannot be used without a connection.
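One concrete piece of this workflow is Firebase ML's model downloader, which fetches a custom model hosted in Firebase and keeps it updated over the air. A hedged sketch, assuming the `com.google.firebase:firebase-ml-modeldownloader` dependency and a model named `my-classifier` that you have registered in the Firebase console:

```kotlin
import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader

fun fetchModel() {
    // Only download over Wi-Fi to spare the user's data plan
    val conditions = CustomModelDownloadConditions.Builder()
        .requireWifi()
        .build()

    FirebaseModelDownloader.getInstance()
        .getModel(
            "my-classifier",                                // hypothetical model name
            DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND,  // serve local copy, update OTA
            conditions
        )
        .addOnSuccessListener { model ->
            // model.file is the downloaded .tflite, ready for a TFLite Interpreter
            val modelFile = model.file
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

Because the model lives in Firebase rather than in the APK, you can ship a new model version without shipping a new app release — which is exactly the OTA-update advantage described above.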
ML or AI Processing on Server
This is an alternative method you can use, too. Usually it is applied to processing that happens once per request, not to continuous streams. The requirement is that someone understands how to implement ML in the cloud and then expose the results as JSON.

That way, you only need to understand how to send and read data via a REST API, something often called networking, just like when building applications as usual. You can use various libraries, such as Retrofit, Fuel, FAN, and LoopJ.
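To make the idea concrete, here is a hedged sketch of calling such a server with Retrofit. The endpoint name, base URL, and JSON fields are all hypothetical — they depend entirely on what your backend exposes — and the snippet assumes the Retrofit and Gson-converter dependencies are in place.

```kotlin
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
import retrofit2.http.Body
import retrofit2.http.POST

// Hypothetical JSON payloads for a server-side prediction endpoint
data class PredictionRequest(val imageBase64: String)
data class PredictionResponse(val label: String, val confidence: Float)

interface MlApiService {
    // Hypothetical endpoint: POST /predict with the image, get a label back
    @POST("predict")
    suspend fun predict(@Body body: PredictionRequest): PredictionResponse
}

// Placeholder URL — replace with your own server's address
val retrofit = Retrofit.Builder()
    .baseUrl("https://example.com/api/")
    .addConverterFactory(GsonConverterFactory.create())
    .build()

val service: MlApiService = retrofit.create(MlApiService::class.java)
```

From the Android side, this is just ordinary networking code; all of the ML complexity lives behind the endpoint on the server.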
Conclusion
OK, those are various ways to implement machine learning on Android, especially for those of you who are not machine learning experts. The simplest one, with built-in models, is ML Kit. Then there is TensorFlow Lite, which is popular and often used to deploy your own models.

Next, there is MediaPipe, a newer framework with various new services. The three mentioned so far are on-device ML services, where ML processing is carried out on the device itself. Apart from that, you can also do ML processing in the cloud, either using the Firebase ML service or via your own server.

So, of these various methods, which one do you want to implement? Please write it in the comments column, OK? If there are many requests, we will present a tutorial in an upcoming blog post.