This repository was archived by the owner on Nov 1, 2024. It is now read-only.

LabGraph Vision implementation #113

Open · wants to merge 20 commits into base: main
4 changes: 4 additions & 0 deletions android/vision/.gitignore
@@ -0,0 +1,4 @@
# Project exclude paths
/.gradle/
/app/build/
/app/build/intermediates/javac/debug/classes/
45 changes: 45 additions & 0 deletions android/vision/README.md
@@ -0,0 +1,45 @@

# LabGraph Vision Object Detection Android Demo

### Overview

This is an object detection app that continuously detects objects (bounding boxes, classes, and confidence scores) in the frames of a video imported from the device gallery, with the option to use a quantized [MobileNetV2](https://storage.cloud.google.com/tf_model_garden/vision/qat/mobilenetv2_ssd_coco/mobilenetv2_ssd_256_uint8.tflite), [EfficientDet Lite0](https://storage.googleapis.com/mediapipe-tasks/object_detector/efficientdet_lite0_uint8.tflite), or [EfficientDet Lite2](https://storage.googleapis.com/mediapipe-tasks/object_detector/efficientdet_lite2_uint8.tflite) model.
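The per-frame output is a ranked list of detections. As a rough illustration (not the app's actual code), the detector's score threshold and max-results settings amount to a filter like the one below, where `Detection` is a hypothetical, simplified stand-in for the richer result type the detection task returns:

```kotlin
// Hypothetical stand-in for one detection produced by the model.
data class Detection(val label: String, val confidence: Float)

// Keep only detections at or above the threshold, best-first,
// capped at maxResults — the two settings exposed by the app's UI.
fun filterDetections(
    detections: List<Detection>,
    threshold: Float,
    maxResults: Int
): List<Detection> =
    detections
        .filter { it.confidence >= threshold }
        .sortedByDescending { it.confidence }
        .take(maxResults)
```

With a threshold of 0.5 and `maxResults = 1`, only the single highest-confidence detection above 0.5 survives.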

The model files are downloaded by a Gradle script when you build and run the app, so you don't need to download the TFLite models into the project manually unless you wish to use your own. If you do, place them in the app's *assets* directory.

This application should be run on a physical Android device to take advantage of the gallery, though an emulator can also be used to open locally stored files.

## Build the demo using Android Studio

### Prerequisites

* The **[Android Studio](https://developer.android.com/studio/index.html)**
IDE. This sample has been tested on Android Studio Flamingo.

* A physical Android device with a minimum OS version of SDK 24 (Android 7.0 -
Nougat) with developer mode enabled. The process of enabling developer mode
may vary by device. You may also use an Android emulator with more limited
functionality.

### Building

* Open Android Studio. From the Welcome screen, select Open an existing
Android Studio project.

* From the Open File or Project window that appears, navigate to and select
the labgraph/android/vision directory. Click OK. You may
be asked if you trust the project. Select Trust.

* If it asks you to do a Gradle Sync, click OK.

* With your Android device connected to your computer and developer mode
enabled, click on the green Run arrow in Android Studio.

### Models used

Downloading, extracting, and placing the models into the *assets* folder are
handled automatically by the **download_models.gradle** file.

### Results

The results of the detection are logged to the LogCat console under the "Result" tag.
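As an illustration of what such a log line might contain, a small helper along these lines (hypothetical, not part of the app) could render the detections into the string that gets logged, e.g. via `Log.d("Result", formatResult(detections))`:

```kotlin
import java.util.Locale

// Hypothetical stand-in for one detection produced by the model.
data class Detection(val label: String, val confidence: Float)

// Render detections as a one-line summary suitable for LogCat.
fun formatResult(detections: List<Detection>): String =
    if (detections.isEmpty()) "no detections"
    else detections.joinToString(", ") {
        String.format(Locale.US, "%s (%.2f)", it.label, it.confidence)
    }
```

For a frame containing a person and a bicycle this would log something like `person (0.87), bicycle (0.50)`.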
106 changes: 106 additions & 0 deletions android/vision/app/build.gradle
@@ -0,0 +1,106 @@
/*
* Copyright 2022 The TensorFlow Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-kapt'
apply plugin: "androidx.navigation.safeargs"
apply plugin: 'de.undercouch.download'

android {
compileSdkVersion 32
defaultConfig {
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
applicationId "labgraph_vision.objectdetection"
minSdkVersion 24
targetSdkVersion 32
versionCode 1
versionName "1.0.0"
}

dataBinding {
enabled = true
}

compileOptions {
sourceCompatibility rootProject.ext.java_version
targetCompatibility rootProject.ext.java_version
}

kotlinOptions {
jvmTarget = rootProject.ext.java_version
}

buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}

buildFeatures {
viewBinding true
}
androidResources {
noCompress 'tflite'
}

}

// import DownloadModels task
project.ext.ASSET_DIR = projectDir.toString() + '/src/main/assets'

// Download default models; if you wish to use your own models then
// place them in the "assets" directory and comment out this line.
apply from:'download_models.gradle'

dependencies {
// Kotlin lang
implementation 'androidx.core:core-ktx:1.6.0'
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version"
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.5.0'

// App compat and UI things
implementation 'androidx.appcompat:appcompat:1.3.1'
implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.3.1'
implementation 'androidx.constraintlayout:constraintlayout:2.0.4'
implementation 'com.google.android.material:material:1.0.0'
implementation 'androidx.localbroadcastmanager:localbroadcastmanager:1.0.0'
implementation 'androidx.fragment:fragment-ktx:1.5.4'

// Navigation library
def nav_version = "2.3.5"
implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
implementation "androidx.navigation:navigation-ui-ktx:$nav_version"

// CameraX core library
def camerax_version = '1.1.0'
implementation "androidx.camera:camera-core:$camerax_version"

// CameraX Camera2 extensions
implementation "androidx.camera:camera-camera2:$camerax_version"

// CameraX Lifecycle library
implementation "androidx.camera:camera-lifecycle:$camerax_version"

// CameraX View class
implementation "androidx.camera:camera-view:$camerax_version"

//WindowManager
implementation 'androidx.window:window:1.0.0-alpha09'

implementation 'com.google.mediapipe:tasks-vision:0.10.0'
}
29 changes: 29 additions & 0 deletions android/vision/app/download_models.gradle
@@ -0,0 +1,29 @@
/*
* Copyright 2023 The TensorFlow Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

task downloadModelFile0(type: Download) {
src 'https://storage.googleapis.com/mediapipe-models/object_detector/efficientdet_lite0/float32/1/efficientdet_lite0.tflite'
dest project.ext.ASSET_DIR + '/efficientdet-lite0.tflite'
overwrite false
}

task downloadModelFile1(type: Download) {
src 'https://storage.googleapis.com/mediapipe-models/object_detector/efficientdet_lite2/float32/1/efficientdet_lite2.tflite'
dest project.ext.ASSET_DIR + '/efficientdet-lite2.tflite'
overwrite false
}

preBuild.dependsOn downloadModelFile0, downloadModelFile1
58 changes: 58 additions & 0 deletions android/vision/app/src/main/AndroidManifest.xml
@@ -0,0 +1,58 @@
<?xml version="1.0" encoding="utf-8"?>
<!--
~ Copyright 2022 The TensorFlow Authors. All Rights Reserved.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:dist="http://schemas.android.com/apk/distribution"
xmlns:tools="http://schemas.android.com/tools"
package="labgraph_vision.objectdetection">

<!-- Enable instant app support -->
<dist:module dist:instant="true" />

<application
android:icon="@mipmap/ic_launcher"
android:roundIcon="@mipmap/ic_launcher_round"
android:label="@string/app_name"
android:allowBackup="true"
android:taskAffinity=""
tools:ignore="AllowBackup">

<activity
android:name="labgraph_vision.objectdetection.MainActivity"
android:clearTaskOnLaunch="true"
android:theme="@style/AppTheme"
android:exported="true"
android:icon="@mipmap/ic_launcher"
android:rotationAnimation="seamless"
android:resizeableActivity="true"
android:configChanges="orientation|screenLayout|screenSize|smallestScreenSize"
tools:targetApi="O">

<!-- Main app intent filter -->
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>

<!-- Declare notch support -->
<meta-data android:name="android.notch_support" android:value="true"/>

</activity>

</application>

</manifest>
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,50 @@
/*
* Copyright 2022 The TensorFlow Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package labgraph_vision.objectdetection

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.navigation.fragment.NavHostFragment
import androidx.navigation.ui.setupWithNavController
import labgraph_vision.objectdetection.databinding.ActivityMainBinding

/**
* Main entry point into our app. This app follows the single-activity pattern, and all
* functionality is implemented in the form of fragments.
*/
class MainActivity : AppCompatActivity() {

private lateinit var activityMainBinding: ActivityMainBinding

override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
activityMainBinding = ActivityMainBinding.inflate(layoutInflater)
setContentView(activityMainBinding.root)

val navHostFragment =
supportFragmentManager.findFragmentById(R.id.fragment_container) as NavHostFragment
val navController = navHostFragment.navController
activityMainBinding.navigation.setupWithNavController(navController)
activityMainBinding.navigation.setOnNavigationItemReselectedListener {
// ignore the reselection
}
}

override fun onBackPressed() {
finish()
}
}
@@ -0,0 +1,51 @@
/*
* Copyright 2022 The TensorFlow Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package labgraph_vision.objectdetection

import androidx.lifecycle.ViewModel

/**
* This ViewModel is used to store object detector helper settings
*/
class MainViewModel : ViewModel() {
private var _delegate: Int = ObjectDetectorHelper.DELEGATE_CPU
private var _threshold: Float =
ObjectDetectorHelper.THRESHOLD_DEFAULT
private var _maxResults: Int =
ObjectDetectorHelper.MAX_RESULTS_DEFAULT
private var _model: Int = ObjectDetectorHelper.MODEL_EFFICIENTDETV0

val currentDelegate: Int get() = _delegate
val currentThreshold: Float get() = _threshold
val currentMaxResults: Int get() = _maxResults
val currentModel: Int get() = _model

fun setDelegate(delegate: Int) {
_delegate = delegate
}

fun setThreshold(threshold: Float) {
_threshold = threshold
}

fun setMaxResults(maxResults: Int) {
_maxResults = maxResults
}

fun setModel(model: Int) {
_model = model
}
}