Face and hand tracking in the browser with MediaPipe and TensorFlow.js


News category: AI Videos
Source: blog.tensorflow.org

Posted by Ann Yuan and Andrey Vakunov, Software Engineers at Google

Today we're excited to release two new packages: facemesh and handpose for tracking key landmarks on faces and hands respectively. This release has been a collaborative effort between the MediaPipe and TensorFlow.js teams within Google Research.

Try the demos live in your browser

The facemesh package finds facial boundaries and landmarks within an image, and handpose does the same for hands. These packages are small, fast, and run entirely within the browser, so data never leaves the user's device, preserving user privacy. You can try them out right now using these links:

  1. Facemesh package
  2. Handpose package

These packages are also available as part of MediaPipe, a library for building multimodal perception pipelines.

We hope real-time face and hand tracking will enable new modes of interactivity. For example, facial geometry location is the basis for classifying expressions, and hand tracking is the first step for gesture recognition. We're excited to see how applications with such capabilities will push the boundaries of interactivity and accessibility on the web.

Deep dive: Facemesh

The facemesh package infers approximate 3D facial surface geometry from an image or video stream, requiring only a single camera input without the need for a depth sensor. This geometry locates features such as the eyes, nose, and lips within the face, including details such as lip contours and the facial silhouette. This information can be used for downstream tasks such as expression classification (but not for identification). Refer to our model card for details on how the model performs across different datasets. This package is also available through MediaPipe.
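Each of these named regions is exposed through the annotations property of a prediction. A minimal sketch of reading one region, assuming a face prediction object of the shape shown in the Usage section below:

// Assumes `face` is one prediction object returned by
// model.estimateFaces() (see the Usage section below).
// Each annotation is an array of [x, y, z] landmark positions
// for a named facial region, such as the facial silhouette.
const silhouette = face.annotations.silhouette;
console.log(`The facial outline is traced by ${silhouette.length} landmarks.`);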

Performance characteristics

Facemesh is a lightweight package containing only ~3MB of weights, making it ideally suited for real-time inference on a variety of mobile devices. When testing, note that TensorFlow.js also provides several different backends to choose from, including WebGL and WebAssembly (WASM) with XNNPACK for devices with lower-end GPUs.

(Table: facemesh performance across a few different devices and TensorFlow.js backends.)
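On low-end hardware the active backend can matter as much as the model itself. A minimal sketch of selecting the WASM backend before loading the model, assuming the @tensorflow/tfjs-backend-wasm package (verify the package name and setup against the tfjs version you use):

import * as tf from '@tensorflow/tfjs-core';
// Side-effect import that registers the WASM backend with tfjs-core
// (package name is an assumption to check for your tfjs version).
import '@tensorflow/tfjs-backend-wasm';
import * as facemesh from '@tensorflow-models/facemesh';

async function loadWithWasmBackend() {
  await tf.setBackend('wasm'); // 'webgl' is the usual default where available
  await tf.ready();            // wait for backend initialization to finish
  return facemesh.load();
}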

Installation

There are two ways to install the facemesh package:
  1. Through NPM:
    import * as facemesh from '@tensorflow-models/facemesh';
  2. Through script tags:
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/facemesh"></script>
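If you take the NPM route, the runtime packages loaded by the script tags above still need to be present; a hedged sketch of the install (the peer-dependency names are an assumption to verify against the package.json of the version you install):

// Assumed install command (not part of the original post); facemesh is
// expected to list the TensorFlow.js runtime packages as peer dependencies:
//   npm install @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow-models/facemesh
import * as facemesh from '@tensorflow-models/facemesh';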

Usage

Once the package is installed, you only need to load the model weights and pass in an image to start detecting facial landmarks:
// Load the MediaPipe facemesh model assets.
const model = await facemesh.load();

// Pass in a video stream to the model to obtain
// an array of detected faces from the MediaPipe graph.
const video = document.querySelector("video");
const faces = await model.estimateFaces(video);

// Each face object contains a `scaledMesh` property,
// which is an array of 468 landmarks.
faces.forEach(face => console.log(face.scaledMesh));
The input to estimateFaces can be a video, a static image, or even an ImageData interface for use in Node.js pipelines. Facemesh then returns an array of prediction objects for the faces in the input, which include information about each face (e.g. a confidence score, and the locations of 468 landmarks within the face). Here is a sample prediction object:
{
  faceInViewConfidence: 1,
  boundingBox: {
    topLeft: [232.28, 145.26], // [x, y]
    bottomRight: [449.75, 308.36],
  },
  mesh: [
    [92.07, 119.49, -17.54], // [x, y, z]
    [91.97, 102.52, -30.54],
    ...
  ],
  scaledMesh: [
    [322.32, 297.58, -17.54],
    [322.18, 263.95, -30.54],
    ...
  ],
  annotations: {
    silhouette: [
      [326.19, 124.72, -3.82],
      [351.06, 126.30, -3.00],
      ...
    ],
    ...
  }
}
Refer to our README for more details about the API.
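To make the output concrete, here is a minimal sketch of a continuous tracking loop that draws every scaledMesh landmark onto a canvas overlay; the <video>/<canvas> markup and the 2x2-pixel dot size are assumptions, not part of the facemesh API:

import * as facemesh from '@tensorflow-models/facemesh';

async function main() {
  // Assumed markup: a <video> element already playing a webcam stream,
  // and a <canvas> overlaid on top of it with matching dimensions.
  const video = document.querySelector('video');
  const canvas = document.querySelector('canvas');
  const ctx = canvas.getContext('2d');

  const model = await facemesh.load();

  async function frame() {
    const faces = await model.estimateFaces(video);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (const face of faces) {
      // scaledMesh holds 468 [x, y, z] points in input-image coordinates.
      for (const [x, y] of face.scaledMesh) {
        ctx.fillRect(x - 1, y - 1, 2, 2); // draw each landmark as a small dot
      }
    }
    requestAnimationFrame(frame);
  }
  frame();
}

main();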

Deep dive: Handpose

The handpose package detects hands in an input image or video stream, and returns twenty-one 3-dimensional landmarks locating features within each hand. Such landmarks include the locations of each finger joint and the palm. In August 2019, we released the model through MediaPipe; you can find more information about the model architecture in our blog post accompanying the release. Refer to our model card for details on how handpose performs across different datasets. This package is also available through MediaPipe.

Performance characteristics

Handpose is a relatively lightweight package consisting of ~12MB of weights, making it suitable for real-time inference.

(Table: handpose performance across different devices.)

Installation

There are two ways to install the handpose package.
  1. Through NPM:
    import * as handpose from '@tensorflow-models/handpose';
  2. Through script tags:
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/handpose"></script>

Usage

Once the package is installed, you just need to load the model weights and pass in an image to start tracking hand landmarks:
// Load the MediaPipe handpose model assets.
const model = await handpose.load();

// Pass in a video stream to the model to obtain
// a prediction from the MediaPipe graph.
const video = document.querySelector("video");
const hands = await model.estimateHands(video);

// Each hand object contains a `landmarks` property,
// which is an array of 21 3-D landmarks.
hands.forEach(hand => console.log(hand.landmarks));
As with facemesh, the input to estimateHands can be a video, a static image, or an ImageData interface. The package then returns an array of objects describing hands in the input. Here is a sample prediction object:
{
  handInViewConfidence: 1,
  boundingBox: {
    topLeft: [162.91, -17.42], // [x, y]
    bottomRight: [548.56, 368.23],
  },
  landmarks: [
    [472.52, 298.59, 0.00], // [x, y, z]
    [412.80, 315.64, -6.18],
    ...
  ],
  annotations: {
    indexFinger: [
      [412.80, 315.64, -6.18],
      [350.02, 298.38, -7.14],
      ...
    ],
    ...
  }
}
Refer to our README for more details about the API.
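As one example of the gesture recognition mentioned earlier, here is a minimal sketch of a pinch detector built on the annotations object; the thumb annotation key is an assumption mirroring the indexFinger key shown in the sample above, and the 40-pixel threshold is an arbitrary value to tune per application:

// Returns true when the thumb tip and index fingertip are close together.
// Assumes `hand` is one prediction object from model.estimateHands().
function isPinching(hand, thresholdPx = 40) {
  // Each finger annotation is an array of [x, y, z] points ordered from
  // the base of the finger to its tip, so the last entry is the fingertip.
  const thumbTip = hand.annotations.thumb[hand.annotations.thumb.length - 1];
  const indexTip = hand.annotations.indexFinger[hand.annotations.indexFinger.length - 1];
  const dx = thumbTip[0] - indexTip[0];
  const dy = thumbTip[1] - indexTip[1];
  return Math.hypot(dx, dy) < thresholdPx;
}

// Usage with the estimateHands() output from above:
// const hands = await model.estimateHands(video);
// hands.forEach(hand => console.log(isPinching(hand)));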

Looking ahead

We plan to continue improving facemesh and handpose. We will add support for multi-hand tracking in the near future. We are also always working on speeding up our models, especially on mobile devices. In the past months of development, we have seen performance for facemesh and handpose improve significantly, and we believe this trend will continue. The MediaPipe team is developing more streamlined model architectures, and the TensorFlow.js team is always investigating ways to speed up inference, such as operator fusion. Faster inference will in turn unlock larger, more accurate models for use in real time pipelines.


Acknowledgements

We would like to thank the MediaPipe team, who generously shared their original implementations of these packages with us. MediaPipe developed and trained the underlying models, and designed the post-processing graph that brings everything together. ...


