

🏫 Google Teachable Machine – Import Pre-Trained AI Models into Scratch #

The Google Teachable Machine extension brings advanced AI models from Google’s Teachable Machine website directly into Scratch.
Train powerful image, sound, or pose recognition models online – then import them into your Scratch projects with a single URL.
Professional-grade machine-learning workflows made simple and accessible for students, teachers, and creators.


🌟 Summary #

  • Three Model Types: Import image, sound, or pose classification models trained in Teachable Machine.
  • Unlimited Labels: Use any number of custom classes – no 10-label limit.
  • Cloud Training: Take advantage of Google’s infrastructure for high-accuracy models.
  • Easy Import: Paste your model’s shareable URL into Scratch for instant use.
  • Real-Time Classification: Get live predictions as the model recognizes images, sounds, or poses.
  • Camera & Video Controls: Show, hide, or flip the live preview to fit your setup.

Key Features #

  • Supports all three Teachable Machine model types (Image, Sound, Pose).
  • Unlimited labels per model.
  • Import via shareable URL or upload files.
  • Confidence scores for every label.
  • Adjustable classification interval and confidence threshold.
  • Runs entirely in your browser after loading – fast and efficient.

🚀 How to Use #

Step 1 – Train Your Model on Teachable Machine #

  1. Go to teachablemachine.withgoogle.com.
  2. Choose a project type: Image, Audio, or Pose.
  3. Create your classes (labels), e.g., “Cat”, “Dog”, “Bird”.
  4. Add training samples for each class:
    • Images: Upload photos or capture via webcam.
    • Sounds: Record short audio clips.
    • Poses: Capture body positions using your webcam.
  5. Click Train Model and wait for training to complete (1–5 minutes).
  6. Test your model in the Preview section to ensure accuracy.
  7. Click Export Model → Upload my model → Upload.
  8. Copy the shareable URL (for example: https://teachablemachine.withgoogle.com/models/abc123/).

Step 2 – Import Your Model into Scratch #

  1. Open pishi.ai/play.
  2. Select the Google Teachable Machine extension.
  3. Allow camera or microphone access if prompted (depending on model type).
  4. Use one of these setup blocks:
    • set image model url: [YOUR_URL]
    • set sound model url: [YOUR_URL]
    • set pose model url: [YOUR_URL]
  5. Wait for the loading indicator – classification starts automatically once loaded.
  6. Use the detection blocks to make your sprites react to recognized labels.
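
Putting the two steps together, a minimal script might look like this (the model URL and the “Cat” label are placeholders for your own model):

```
when green flag clicked
set image model url: [https://teachablemachine.withgoogle.com/models/abc123/]

when [Cat] image detected
say [That looks like a cat!] for (2) seconds
```

Once the model has loaded, the hat block fires whenever the “Cat” class is detected above the confidence threshold.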

Tips #

  • Train each label with 20–50 diverse examples for reliable results.
  • Keep lighting and sound conditions consistent with training.
  • Test your model thoroughly on Teachable Machine before importing.
  • Adjust classification intervals (100–250 ms) for performance on older devices.

🧱 Blocks and Functions #

 

🖼️ Image Classification #

set image model url: [URL]

Loads an image classification model from the provided Teachable Machine URL.
[URL]: paste your Teachable Machine model URL (e.g., https://teachablemachine.withgoogle.com/models/u87itobc/)

What it does:

  • Downloads and loads the model into Scratch.
  • Starts continuous image classification from the camera or stage.
  • Model stays loaded until you set a different URL or reload the project.

 

image label
Reports the label name of the most confidently detected image class.
Returns empty if no label meets the minimum confidence threshold.

Example: If your model detects “Cat” with high confidence, this block reports “Cat”.

 

when [LABEL] image detected
Hat block that triggers when the specified image label is detected above the minimum confidence threshold.
[LABEL]: select a label from your model or choose “any” to trigger on any detection.

Use “any” to react whenever anything is detected above the confidence threshold.

 

[LABEL] image detected
Boolean block that returns true when the specified image label is currently detected.
[LABEL]: select a label from your model or “any”.

 

confidence of image [LABEL]
Reports the confidence score (0–1) for the specified image label.
Higher values = more certain detection.

Example: confidence of image “Dog” might return 0.95 (95% confident).
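
As a sketch, the Boolean and reporter blocks can be combined so a sprite reacts only to very confident detections (the “Dog” label is an example from a hypothetical model):

```
forever
  if <<[Dog] image detected> and <(confidence of image [Dog]) > (0.9)>> then
    say (join [Dog! Confidence: ] (confidence of image [Dog]))
  end
end
```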

 


🔊 Sound Classification #

set sound model url: [URL]

Loads a sound classification model from the provided Teachable Machine URL.
[URL]: paste your Teachable Machine audio model URL.

What it does:

  • Downloads and loads the audio model into Scratch.
  • Starts continuous sound classification from your microphone.
  • Listens and classifies audio in real time.

 

sound label
Reports the label name of the most confidently detected sound class.
Returns empty if no sound meets the minimum confidence threshold.

Example: If your model hears “Clap”, this block reports “Clap”.

Note: Background noise is automatically filtered from detection results.

 

when [LABEL] sound detected
Hat block that triggers when the specified sound label is detected above the minimum confidence threshold.
[LABEL]: select a label from your model or choose “any”.

Note: Background noise labels are excluded from “any” triggers.

 

[LABEL] sound detected
Boolean block that returns true when the specified sound label is currently detected.
[LABEL]: select a label from your model or “any”.

 

confidence of sound [LABEL]
Reports the confidence score (0–1) for the specified sound label.

 


🕺 Pose Classification #

set pose model url: [URL]

Loads a pose classification model from the provided Teachable Machine URL.
[URL]: paste your Teachable Machine pose model URL.

What it does:

  • Downloads and loads the pose model into Scratch.
  • Starts continuous pose classification from the camera.
  • Detects body position and classifies based on your trained poses.

 

pose label
Reports the label name of the most confidently detected pose class.
Returns empty if no pose meets the minimum confidence threshold.

Example: If your model detects “Arms Up”, this block reports “Arms Up”.

 

when [LABEL] pose detected
Hat block that triggers when the specified pose label is detected above the minimum confidence threshold.
[LABEL]: select a label from your model or choose “any”.

 

[LABEL] pose detected
Boolean block that returns true when the specified pose label is currently detected.
[LABEL]: select a label from your model or “any”.

 

confidence of pose [LABEL]
Reports the confidence score (0–1) for the specified pose label.
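
A pose model could give live feedback, as in this sketch (the “Arms Up” label is an example from a hypothetical model):

```
when [Arms Up] pose detected
say [Great stretch!] for (2) seconds

when green flag clicked
forever
  say (pose label)
end
```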

 


🎯 Confidence (shared blocks) #

Important Note ([TYPE]) – These blocks include a [TYPE] dropdown that lets you select the AI category – image, sound, or pose. This determines which model the block controls or reports data from.

set [TYPE] minimum confidence [CONFIDENCE]

Sets the minimum confidence threshold (0–1) for detections in the selected category.

  • [TYPE]: selects which AI detector the confidence applies to – from the dropdown menu choose: image, sound, or pose.
  • [CONFIDENCE]: sets the threshold value between 0 and 1.

Use this block to control how certain the AI must be before reporting a detection:

  • Default: 0.5 – balanced performance.
  • Raise to 0.6–0.8: for stricter, more accurate detection (fewer false positives).
  • Lower to 0.3–0.4: for more sensitivity (may increase false positives).
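
For example, a project could make image detection strict while keeping sound detection sensitive:

```
when green flag clicked
set [image v] minimum confidence (0.7)
set [sound v] minimum confidence (0.4)
```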

[TYPE] minimum confidence

Reports the current confidence threshold value for the selected detector (image, sound, or pose).


⚙️ Classification Controls (shared blocks) #

Important Note ([TYPE]) – These blocks include a [TYPE] dropdown that lets you select the AI category – image, sound, or pose. This determines which model the block controls or reports data from.

  • classify [INTERVAL] - Choose how often detection runs:
    • every time this block runs
    • continuous, without delay
    • continuous, every 50–2500 ms
  • turn classification [on/off] - start or stop continuous detection.
  • classification interval - reports the current interval in milliseconds.
  • continuous classification - reports whether continuous detection is “on” or “off”.
  • select input image [camera/stage] - choose camera or stage.
  • input image - reports the active input source.
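
As a sketch, these controls could switch classification to the stage and slow it down for an older device:

```
when green flag clicked
select input image [stage v]
classify [continuous, every 250 ms v]
turn classification [on v]
```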

🎥 Video Controls (shared blocks) #

Important Note ([TYPE]) – These blocks include a [TYPE] dropdown that lets you select the AI category – image, sound, or pose. This determines which model the block controls or reports data from.

  • turn video [on/off/on-flipped] - show or hide the camera preview. “on” mirrors the view like a selfie; “on-flipped” shows the real left/right orientation.
  • select camera [CAMERA] - choose which connected camera to use.

🎓 Educational Uses #

  • Learn Professional ML Workflows: Experience the full pipeline – collect data, train, test, deploy.
  • Teach deep-learning and transfer-learning concepts using Google’s tools.
  • Explore model evaluation – accuracy, confidence, false positives/negatives.
  • Build collaborative class projects using shareable model URLs.
  • Introduce students to real-world AI tools used in industry.

🎮 Example Projects #

  • Animal Identifier: Recognize animals – educational quiz game.
  • Music Detector: Identify piano, guitar, or drums from sound.
  • Yoga Coach: Train pose models for yoga positions – live feedback.
  • Sign Language Helper: Recognize ASL hand signs – accessibility training.
  • Emotion Classifier: Detect facial expressions like happy or surprised.
  • Voice Command Game: “Jump”, “Duck”, “Left”, “Right” – play hands-free!
  • Recycling Sorter: Classify materials like plastic, paper, metal, glass.
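
For instance, the Voice Command Game idea could start from a sketch like this (assuming a sound model trained with “Left” and “Right” classes):

```
when [Left] sound detected
change x by (-30)

when [Right] sound detected
change x by (30)
```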

🧩 Try it yourself: pishi.ai/play


🔧 Tips and Troubleshooting (shared blocks) #


  • No camera?
    • Make sure your camera is connected and browser permission is allowed.
    • If the camera is blocked, enable it in your browser’s site settings and reload the page.
    • During extension load, if no cameras are detected, the input automatically switches to the stage image so you can still test the extension’s features.
  • No detection?
    • continuous classification: use this reporter to check whether classification is active.
    • If it is active, improve lighting and face the camera directly.
    • turn classification [on]: if classification is not active, use this block, then recheck with the reporter above.
    • In camera input mode, turning the camera off also stops classification - turn the video back on or switch the input to stage.
    • In stage input mode, the system classifies whatever is visible on the stage - backdrops, sprites, or images. You can turn the video off completely and still process stage images.
    • Stage mode is slower than camera input, so set a classification interval (e.g., 100–250 ms) for smoother results using this block: classify [INTERVAL]
    • In stage mode, “left” and “right” are swapped because the stage image is not mirrored - the coordinate space represents a real (non-mirrored) view.
    • Classification can also restart automatically when you use blocks such as: turn video [on] / classify [INTERVAL] / select camera [CAMERA] / select input image [camera/stage].
  • Flipped view?
    turn video [on-flipped]: Use this to show the camera without mirroring. “on” mirrors like a selfie; “on-flipped” shows real left/right orientation.
  • Laggy or slow?
    Use classification intervals between 100–250 ms or close other browser tabs to reduce processing load.
  • WebGL2 warning?
    Try Firefox, or a newer device that supports WebGL2 graphics acceleration.
  • Analyze stage instead of camera?
    select input image [stage]: Use this to analyze the Scratch stage image instead of a live camera feed.

 

🏫 Teachable Machine Specific Tips #

  • Model not loading? Make sure you clicked “Upload my model” on Teachable Machine and copied the full shareable URL.
  • URL format error? Your link must start with https://teachablemachine.withgoogle.com/models/ and end with a slash (/).
  • Low accuracy in Scratch? Verify your model’s performance on Teachable Machine first – ensure lighting and sound match your training setup.
  • Model not updating? Set the model URL again or reload the Scratch page to refresh.
  • Sound model not detecting? Check microphone permissions and use sounds similar to your training samples.
  • Pose model confused? Ensure your full upper body is visible – same conditions as during training.
  • Too many false positives? Increase minimum confidence (0.7–0.8) or retrain with more diverse examples.
  • Testing multiple models? Use the “set [type] model url” block to switch – changes apply instantly.
  • Sharing models? Share your Teachable Machine URL so others can import and use it in their projects.
  • Model too large? Reduce image resolution or training samples before exporting from Teachable Machine.

🔒 Privacy & Safety #

  • Models are hosted on Google’s servers when uploaded.
  • Once loaded into Scratch, all classification runs locally in your browser.
  • Camera and microphone data are processed locally – never sent to servers during use.
  • Your Teachable Machine training data follows Google’s privacy policy.
  • Shareable URLs are public – avoid posting sensitive model links.

🧪 Technical Info #

  • Model Source: Google Teachable Machine (teachablemachine.withgoogle.com)
  • Framework: TensorFlow.js – runs fully in-browser with WebGL acceleration
  • Model Types: Image, Sound, Pose classification
  • Labels: Unlimited (based on model)
  • Inputs: Microphone (sound), Camera or Stage canvas (image/pose)
  • Default Confidence: 0.5 (adjustable 0–1)
  • Model Format: TensorFlow.js graph model + metadata
  • Requires: WebGL2 for best performance

🔗 Related Extensions #

  • 🖼️ Image Trainer – train custom models directly inside Scratch
  • 😎 Face Mesh – detect facial landmarks
  • 🖐️ Hand Tracking – detect hand landmarks
  • 🕺 Pose Tracking – track full-body pose

🧩 Comparison: Image Trainer vs Google Teachable Machine Extension #

Both extensions let you use custom-trained AI models in Scratch, but they differ in where and how the training happens.

Feature | Google Teachable Machine | Image Trainer
--- | --- | ---
Training Location | Online via the Teachable Machine website | Directly inside Scratch – instant and integrated
Model Types | Image, Sound, and Pose classification | Image classification only
Labels Supported | Unlimited (depends on model) | 10 labels (1–10)
Training Speed | Requires cloud training and export | Instant in-browser training after first load
Model Sharing | Share via URLs – anyone can import models | Download or upload JSON files
Training Infrastructure | Google Cloud compute – handles larger datasets | Local browser compute – faster iteration, lower capacity
Customization | Full deep-learning architectures and settings | MobileNet v2 + KNN – lightweight and fast
Best For | Advanced multi-label projects and model sharing | Quick classroom training and demos
Privacy | Training hosted by Google; classification local | All training and classification fully local
Educational Focus | Professional ML workflow – train, export, deploy | Hands-on ML learning – real-time results


 

💡 Why Choose Teachable Machine? #

Teachable Machine is ideal for sophisticated, shareable AI projects.

  • Train Image, Sound, and Pose models – three types in one.
  • Unlimited labels with powerful cloud training.
  • Share models easily via public URLs.
  • Perfect for advanced coursework and collaboration.
  • Introduces real-world ML workflows and tools.

💡 Why Choose Image Trainer? #

Image Trainer is perfect for rapid experimentation inside Scratch.

  • Instant feedback loop – no external sites or accounts.
  • Fully offline and private – all data stays on your device.
  • Ideal for classrooms, workshops, and beginner ML projects.

In short:
Teachable Machine = professional workflow for complex, shareable models (image/sound/pose).
Image Trainer = instant, integrated, beginner-friendly ML inside Scratch (image only).

