Optical Mark Recognition in JavaScript for a cross-platform mobile app - javascript

I'd like to build an exam correction app. Given a paper multiple-choice questionnaire (MCQ), I'd like to use my camera to analyse it and get a score.
This exists in Python: https://www.pyimagesearch.com/2016/10/03/bubble-sheet-multiple-choice-scanner-and-test-grader-using-omr-python-and-opencv/
Does anyone know of a JavaScript library for this?

I am doing the same thing. As I found out, you should not do the image processing in JavaScript, for performance reasons.
You should do the following steps:
- Write a native module for Android or iOS (or in C++ for both) to handle the image.
- Write a native module that processes frames from the camera, picks an appropriate frame, and passes it to the image-handling module.
- Bridge them all to the React Native side and use them like other native libraries.
Read more about native modules: https://reactnative.dev/docs/0.60/native-modules-android
You will probably need the OpenCV library to handle your images.
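While the heavy image processing belongs in native code as described above, the final scoring step can stay in JavaScript. As a minimal sketch (all names here are mine, not from any library), assume the native module has already returned, for each question, the index of the bubble it detected as filled:

```javascript
// Hypothetical final scoring step. Assumes a native image-processing module
// has already returned the index of the filled bubble for each question.
function gradeSheet(detectedAnswers, answerKey) {
  if (detectedAnswers.length !== answerKey.length) {
    throw new Error("Detected answer count does not match the key");
  }
  const correct = detectedAnswers.filter(
    (answer, i) => answer === answerKey[i]
  ).length;
  return { correct, total: answerKey.length, score: correct / answerKey.length };
}

// Example: a 5-question sheet with choices indexed 0-4 (A-E).
const answerKey = [1, 4, 0, 3, 1];
const detected = [1, 4, 2, 3, 1];
console.log(gradeSheet(detected, answerKey)); // { correct: 4, total: 5, score: 0.8 }
```

This mirrors the grading loop in the pyimagesearch tutorial linked in the question; only the bubble detection itself needs OpenCV.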

Related

How to turn my javascript/React Native based app into .ipa for deployment?

I currently have a very basic starter app, but I am trying to upload it to TestFlight so that I can later upload new versions of my company's app. It is a basic React Native app written in JavaScript.
I'm going to attempt to upload it to Transporter and then on to TestFlight. I'm having trouble converting the folder containing my app into the required format (.ipa or .pkg). Every tutorial I've seen uses Xcode, but I write my code in VS Code with JavaScript. How should I go about this? I can't find any resources, but I may be looking in the wrong places. Thanks.
The tutorials aren't using Xcode because it's their favorite IDE; they're using it because you must use Xcode to create any iOS build artifacts.
It's fine to use VS Code for writing code, but everything has to go through Xcode eventually. Make sure you've gone through all the iOS environment setup.

Is it possible to extract frames from a live camera feed in react-native?

I am trying to send image frames to a TensorFlow library on a server. I got it all set up using pure React JS, but implementing it in React Native is giving me a lot of problems.
It seems impossible to get the frames from the WebRTC stream, and even without WebRTC I can't find a way to get frames from any video in React Native.
Has anyone found a good solution to this?
Have you tried a library that can do this for you? Maybe this one can help:
https://github.com/react-native-webrtc/react-native-webrtc
Unlike with "pure" React JS, you need a bit of configuration for both Android and iOS after installing the module.
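As a rough sketch of the JS side, assuming react-native-webrtc is installed and linked: its `mediaDevices.getUserMedia` is part of the library's API, but the function name below and the injection of `mediaDevices` as a parameter are mine (the injection just makes the logic easy to exercise with a stub outside the app). Actual per-frame extraction would still need native help, as discussed elsewhere on this page.

```javascript
// Rough sketch: obtain the camera stream. In the app, `mediaDevices` would
// be the `mediaDevices` export of react-native-webrtc; it is passed in here
// only so the logic can be exercised with a stub.
async function startCameraStream(mediaDevices, onStream) {
  const stream = await mediaDevices.getUserMedia({
    audio: false,
    video: { facingMode: "environment" }, // prefer the back camera
  });
  onStream(stream);
  return stream;
}
```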

Can React Native be used to build my Bluetooth app?

I have a project with my lecturer to build an Android/iOS app that controls a motor through a Bluetooth HC-05 module with an OpenCM microcontroller.
The app should be able to send data to and receive data from the motor, and save data locally.
Can this be done using React Native?
I know a bit of HTML, CSS, and JavaScript, and I see React Native as a better way to build my app than Java or Objective-C, since I don't know either of those. So before I dig deeper into React Native, I want to know: can I use it for my project?
React Native is probably not a good fit for you if your main goal is to avoid writing native code. You will still have to learn the different Bluetooth APIs and their quirks in addition to writing native modules to expose them to JavaScript.

KISSmetrics - React Native iOS

I am using React Native to create an iOS app, so my code is in JavaScript with some Objective-C.
Now I want to implement KISSmetrics in my project. I have done the proper setup based on the KISSmetrics documentation, but when it comes to creating events, user identifications, etc., I have to use data from my JavaScript code.
Does anyone know how to do that? For example:
the Objective-C code to identify the user is this: [[KISSmetricsAPI sharedAPI] identify:@"name@email.com"]; but how can I get the identity of the user from my JavaScript code and use it in place of name@email.com?
I would look into how to build a native module bridge. The way it works is that you create an iOS native module with methods that you can call from JavaScript, which lets you send your data from JS to Objective-C.
Here's an example project that does this:
https://github.com/idehub/react-native-google-analytics-bridge
You don't need to turn it into a full-fledged npm library; you can simply create the necessary native files and JS files directly in your project.
Also, if you don't know already, remember to rebuild the iOS project (hit the Play button in Xcode) to see your changes, because the native side doesn't have live reloading.
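The JS side of such a bridge can be small. As an illustrative sketch (the module name "KISSmetricsBridge" and this wrapper are mine, not from the KISSmetrics SDK), the bridge object is taken as a parameter so the wrapper can be stubbed outside the app:

```javascript
// JS side of a hypothetical native bridge. In the app you would obtain it with:
//   import { NativeModules } from "react-native";
//   const bridge = NativeModules.KISSmetricsBridge;
// The bridge is passed in as a parameter so this can be exercised with a stub.
function identifyUser(bridge, email) {
  if (typeof email !== "string" || !email.includes("@")) {
    throw new Error("A valid email is required to identify the user");
  }
  // Calls through to the Obj-C method you expose with RCT_EXPORT_METHOD,
  // which would in turn call [[KISSmetricsAPI sharedAPI] identify:email].
  bridge.identify(email);
}
```

On the native side, the exported method receives the string from JS, so the hard-coded name@email.com in the Obj-C snippet becomes a method argument.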

Using HTML, CSS, JS with Python to make desktop application?

I recently made a program. It's developed using Node.js and Electron to make it a desktop application. Unfortunately, Electron has quite a large base file size, and I'd like to reduce it. My app files were around 38 MB before adding Electron; with Electron it's roughly 100 MB more than the original.
I've been looking into converting the program to Python to hopefully reduce its size, though I only know the basics of Python, such as how to declare variables and functions. I've seen things like Tkinter, but would I be able to use HTML, CSS, and JS to build the UI of the program and use Python as the backbone (i.e. using the Materialize CSS framework for the UI)?
If so, how could I do this? Also, to be clear, I don't want a web app; I'm looking for a desktop application.
Yes. You can use the standard Qt library, but if you insist on writing the UI yourself, there is the htmlPy library, which you can find here: HTMLPY.
htmlPy is a wrapper around PySide's QtWebKit library. It helps with creating beautiful GUIs using HTML5, CSS3, and JavaScript for standalone Python applications.
Go through it and you will find interesting things.
