Conventional blanket spraying misuses pesticides and fertilizers, driving pollution and exposing farm workers to hazardous chemicals. Smart Droplets aims to flip that script: fusing autonomous retrofit tractors with Direct Injection Systems (DIS), AI models, and a Digital Farm Twin to spray only where needed. Your mission in this hackathon directly fuels that vision—detecting apple scab symptoms at leaf/fruit level so the sprayer can make precise, traceable, Green-Deal-friendly decisions in the field.
- Build and improve a semantic segmentation pipeline that finds Scab symptoms in apple-tree imagery.
- Use the provided train/val/test splits; the test masks are hidden.
- Start from the working baseline code (provided) and improve it.
- Submit a link containing (i) the folder of predicted masks (one per test image) generated by your pipeline; (ii) your improved code; and (iii) a brief report.
- Images: RGB crop images of apple leaves/fruits in orchard conditions.
- Masks: binary segmentation:
  - Class 0: background / non-scab
  - Class 1: scab lesion
- Splits:

```
data/
  train/
    images/   *.png|*.JPG
    masks/    *.png (uint8, values {0,1})
  val/
    images/
    masks/
  test/
    images/   # no masks here
```

- Image sizes are 512 x 512; masks are single-channel (H×W, values in {0,1}).
- Do not alter the test folder contents.
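As a quick sanity check before training, you can verify that the masks match the format described above. A minimal sketch, assuming the splits live under a `data/` folder as in the layout shown (adjust `data_root` to your actual unzipped folder name):

```python
# Minimal sanity check for the data layout described above (paths are assumptions).
from pathlib import Path
import numpy as np
from PIL import Image

data_root = Path("data")  # adjust to your unzipped folder name
for split in ("train", "val"):
    mask_dir = data_root / split / "masks"
    for mask_path in sorted(mask_dir.glob("*.png"))[:5]:  # spot-check a few masks
        mask = np.array(Image.open(mask_path))
        assert mask.ndim == 2, f"{mask_path} is not single-channel"
        assert set(np.unique(mask)) <= {0, 1}, f"{mask_path} has values outside {{0, 1}}"
        print(mask_path.name, mask.shape, np.unique(mask))
```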
Color convention for visualization (recommended, not required):
- Overlay class 1 in red with alpha ≈ 0.4.
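A minimal overlay sketch along those lines, assuming `image` is an H×W×3 uint8 RGB array and `mask` the matching {0,1} mask:

```python
import numpy as np
from PIL import Image

def overlay_scab(image: np.ndarray, mask: np.ndarray, alpha: float = 0.4) -> Image.Image:
    """Blend class 1 (scab) in red over the RGB image for quick visual checks."""
    overlay = image.copy().astype(np.float32)
    red = np.array([255, 0, 0], dtype=np.float32)
    lesion = mask.astype(bool)
    overlay[lesion] = (1 - alpha) * overlay[lesion] + alpha * red
    return Image.fromarray(overlay.astype(np.uint8))

# Example:
# vis = overlay_scab(np.array(Image.open("img.png")), np.array(Image.open("img_mask.png")))
# vis.save("vis.png")
```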
Included features (baseline):
- PyTorch dataloaders (train/val/test), basic transforms.
- A UNet-like model with CrossEntropyLoss.
- A simple training loop and a basic inference script.
- IoU evaluation on val.
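For reference, IoU for class 1 can be computed along these lines (a sketch on binary NumPy masks, not necessarily the baseline's exact implementation):

```python
import numpy as np

def scab_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU (Jaccard) for class 1 over {0,1} masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))
```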
- Where to execute it?
- Google Colab: https://colab.research.google.com/
- Kaggle Notebooks: https://www.kaggle.com/code
- Amazon SageMaker Studio Lab: https://aws.amazon.com/sagemaker/
- Paperspace Gradient: https://www.hyperstack.cloud/
- Microsoft Azure Notebooks: https://visualstudio.microsoft.com/vs/features/notebooks-at-microsoft/
- Locally (More challenging)
This documentation outlines the steps required to clone the repository, install dependencies, and prepare the dataset for the computer vision project within a notebook environment.
Execute this shell command to clone the project repository from GitHub into your current environment.
!git clone https://github.com/AUAgroup/smart_droplets-hackathon-computer_vision

Change the current working directory to the newly cloned repository folder. The magic command %cd is specific to IPython environments like Colab.

%cd smart_droplets-hackathon-computer_vision

Install all necessary Python packages listed in the requirements.txt file.

!pip install -r requirements.txt

After installing packages, especially if they involve core libraries like PyTorch or TensorFlow, it is often required to restart the runtime session to ensure all modules are loaded correctly.
Action Required: Go to the Runtime menu in Google Colab and select "Restart session" (or similar option in your environment).
Execute the shell command to unzip the compressed dataset file. The path provided is typical for files uploaded or stored in the Colab /content/ directory.
!unzip /content/smart_droplets-hackathon-computer_vision/smart_droplets-scab_hackathon-2_classes-checked-patched_512-splits.zip

The final step is to ensure the primary execution script is in place.
Action Required: Copy and paste the contents of the main.py file (or any other primary script) into a new file named main.py within the /content/smart_droplets-hackathon-computer_vision/ directory, or into a new code cell in your notebook, depending on your workflow.
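In a notebook, one convenient way to do this is the %%writefile cell magic (a sketch; the target path assumes the Colab layout above):

```
%%writefile /content/smart_droplets-hackathon-computer_vision/main.py
# paste the full contents of main.py below this magic line
```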
- External data: Not allowed (no extra images or labels). Public pretraining on ImageNet is OK.
- Pretrained weights: OK if from standard computer-vision backbones (e.g., ImageNet).
- Leakage: Do not use test images for training, hyperparameter tuning, or augmentation fitting.
- Automation: Your pipeline must run end-to-end from a single command.
- Time/Compute: Assume 1 GPU (e.g., 16 GB) + 8 vCPUs. Optimize accordingly.
- Fair play: Respect licenses; attribute any borrowed code.
- Path: submission/pred_masks/
- One file per test image, same base filename with a _mask.png suffix (uint8, {0,1}). Example: test/images/IMG_0123.jpg -> submission/pred_masks/IMG_0123_mask.png
- Size must match the corresponding test image (512x512).
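A minimal sketch of the save step, assuming `pred` is a {0,1} NumPy array and `image_path` points at the corresponding test image (the helper name and default output folder are illustrative):

```python
from pathlib import Path
import numpy as np
from PIL import Image

def save_pred_mask(pred: np.ndarray, image_path: Path,
                   out_dir: Path = Path("submission/pred_masks")) -> Path:
    """Write a {0,1} uint8 mask as <image-stem>_mask.png, matching the 512x512 test image size."""
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / f"{image_path.stem}_mask.png"
    Image.fromarray(pred.astype(np.uint8)).save(out_path)
    return out_path

# Example: save_pred_mask(pred, Path("test/images/IMG_0123.jpg"))
# -> submission/pred_masks/IMG_0123_mask.png
```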
The brief report should cover:
- Approach: model, loss, augmentations, post-processing.
- Limitations & next steps (how it helps Smart Droplets/Green Deal).
Primary metric on hidden test set:
- IoU (Jaccard) for class 1 (Scab) — higher is better.
Overall score (100 pts):
- 60 pts — Test IoU (Scab).
- First position (Highest Test IoU): 60 pts
- Second position (Second Highest Test IoU): 50 pts
- Third position (Third Highest Test IoU): 40 pts
- Rest: 30 pts.
- 25 pts — Improvements in the provided code.
- Data: augmentations for small lesions (random resized crop, flips, rotation, color jitter, Gaussian noise), tile/patch strategy for high-res images, mixup/cutmix for segmentation (5 points).
- Loss: class-imbalance handling with BCEWithLogits + Dice, Focal, or Tversky; consider combo losses (e.g., 0.5*BCE + 0.5*Dice; a sketch follows this list) (5 points).
- Architecture: try stronger encoders (e.g., ResNet/ConvNeXt backbones), DeepLabV3+, UNet++, or lightweight models for speed (5 points).
- Optimization: AdamW, one-cycle or cosine LR, early stopping, gradient clipping, AMP (mixed precision) (5 points).
- Inference: test-time augmentation (TTA), sliding window for large images, model ensembling (if time) (5 points).
- 15 pts — Quality of the report.
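To illustrate the combo-loss idea from the Loss item above, here is a minimal 0.5*BCE + 0.5*Dice sketch in PyTorch for single-channel logits (an assumption; adapt the shapes if your baseline outputs two-class logits):

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """0.5 * BCEWithLogits + 0.5 * soft Dice for binary (scab vs. background) segmentation."""
    def __init__(self, bce_weight: float = 0.5, eps: float = 1e-7):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.eps = eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits, target: (N, 1, H, W); target holds {0, 1} as floats
        bce = self.bce(logits, target)
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = (2 * intersection + self.eps) / (union + self.eps)
        return self.bce_weight * bce + (1 - self.bce_weight) * (1 - dice).mean()
```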
- Can we change image sizes? Yes, but preserve the aspect ratio during inference or resize masks back correctly.
- Allowed libraries?
- segmentation_models.pytorch: https://segmentation-models-pytorch.readthedocs.io/en/latest/
- Timm: https://timm.fast.ai/
- https://smp.readthedocs.io/en/latest/encoders_timm.html
- Albumentations: https://albumentations.ai/
- PyTorch: https://pytorch.org/
- Ultralytics: https://docs.ultralytics.com/
- Ensembles? Allowed if runtime remains reasonable.
- After the execution finishes, you will see a completion message.
- Create a new cell, copy-paste and run this command:
!zip -r /content/smart_droplets-hackathon-computer_vision/smart_droplets-scab_hackathon-2_classes-checked-patched_512-splits/test/pred_masks.zip /content/smart_droplets-hackathon-computer_vision/smart_droplets-scab_hackathon-2_classes-checked-patched_512-splits/test/pred_masks
- Download the new zip file:
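On Google Colab, for example, you can trigger the download directly from a cell (a sketch assuming the zip path produced by the command above; other platforms have their own file browsers):

```python
# Colab-only helper; on Kaggle or other platforms use the platform's file browser instead.
from google.colab import files

files.download(
    "/content/smart_droplets-hackathon-computer_vision/"
    "smart_droplets-scab_hackathon-2_classes-checked-patched_512-splits/test/pred_masks.zip"
)
```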
Your work here is not just about a leaderboard. It’s a concrete step toward on-farm autonomy: turning pixels → lesions → precise droplets with lower chemicals, lower exposure, and lower footprint—exactly what Smart Droplets is about. Good luck and have fun! 🍏💧