Project Santa Cruz Part 2: Capturing Images to Train a Model

After three very busy weeks of trying to get a few things done on other projects, I had a chance to circle back around to Project Santa Cruz, also known as Azure Percept. (If you haven’t read my Part 1 post in this series, check that out; it gives more context for this post.) After getting the hardware from Microsoft, I set it up using the wizard on the device. The next step was to capture images to train a model.

This part was somewhat of a wildcard, with several unknowns. I wasn’t even sure the device could capture usable images of birds on the feeder. After thinking about how to mount it, I settled on a hack just to see whether it would work: I pulled up a tiki-torch stand from another part of my yard, zip-tied the hardware to it, and planted it in the ground near the birdfeeder. After some tweaking, the device was close enough to the feeder to capture what I hope are detailed enough images for AI classification.

View From the Feeder
Dev Kit on a Pole

The cool thing about this device is that it has a built-in server, so I can connect to it and stream the camera output back to my computer or phone. I used this to set the camera at the right angle to capture images.

Built-in Webcam
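
To watch the stream outside of the device’s built-in page, something like the OpenCV sketch below works. This is a minimal sketch, not part of my setup: the RTSP address and path are placeholders, so substitute whatever your device’s web interface reports.

```python
# Minimal sketch: preview the dev kit's camera stream with OpenCV.
# Requires `pip install opencv-python`. The stream URL is a placeholder --
# use the address shown by your device's web interface.
import cv2

STREAM_URL = "rtsp://192.168.1.50:8554/result"  # hypothetical IP and path

capture = cv2.VideoCapture(STREAM_URL)
if not capture.isOpened():
    raise RuntimeError(f"Could not open stream at {STREAM_URL}")

while True:
    ok, frame = capture.read()
    if not ok:
        break  # stream dropped; reconnect logic omitted for brevity
    cv2.imshow("Dev kit stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

capture.release()
cv2.destroyAllWindows()
```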

The next step was to use the Azure portal to capture images, so first I needed to verify that the device was registered and healthy on the Azure side. After connecting, I could see the device in the portal, confirm it was connected, and snap a sample image from the camera; everything looked good to go on the cloud side.

Devices in the Azure Portal
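
If you’d rather script that check than click through the portal, the azure-iot-hub Python package can report the same connection state. This is only a sketch under assumptions: the connection string and device ID below are placeholders for whatever your IoT Hub uses.

```python
# Minimal sketch: confirm the dev kit shows as connected in IoT Hub.
# Requires `pip install azure-iot-hub`. The connection string and device ID
# are placeholders, not values from this project.
from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=...;SharedAccessKey=..."
DEVICE_ID = "santa-cruz-devkit"  # hypothetical device name

registry = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
device = registry.get_device(DEVICE_ID)

# connection_state reads "Connected" or "Disconnected"
print(f"{device.device_id}: {device.connection_state}")
```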

After making sure the device was connected, I needed to set up a new project inside Azure Percept Studio. I chose a vision project with object detection, since my objects are the different species of birds that visit the feeder.

New Project
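
Percept Studio walks you through this in the portal, but an equivalent object detection project can also be created with the Custom Vision training SDK. The sketch below is an illustration only; the endpoint, key, project name, and species tags are placeholders rather than the values from my project.

```python
# Minimal sketch: create an object detection project and bird-species tags
# with the Custom Vision training SDK.
# Requires `pip install azure-cognitiveservices-vision-customvision`.
# Endpoint, key, project name, and tag names are placeholders.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
TRAINING_KEY = "<your-training-key>"

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Pick an object detection domain so the project predicts bounding boxes.
detection_domain = next(
    d for d in trainer.get_domains()
    if d.type == "ObjectDetection" and d.name == "General"
)
project = trainer.create_project("bird-feeder-detection", domain_id=detection_domain.id)

# One tag per species expected at the feeder (placeholder list).
for species in ["cardinal", "blue jay", "house finch"]:
    trainer.create_tag(project.id, species)
```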

To train a model, though, I needed to collect a lot of images. Azure Percept Studio has a built-in utility that captures new images from the device on an interval. I selected one frame every 10 seconds for up to 1,000 images, hoping enough birds would visit the feeder in that window to build a usable training set. At that interval, the device collects images for almost three hours.

Image Capture
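
For reference, here is roughly what that capture schedule looks like if you script it yourself against the device’s stream instead of using the portal utility. This is just a sketch: the stream URL is the same assumed address from the earlier snippet, and 1,000 frames at a 10-second interval works out to just under three hours.

```python
# Minimal sketch: save one frame every 10 seconds, up to 1,000 images,
# from the dev kit's stream. STREAM_URL is an assumed placeholder address.
import time
import cv2

STREAM_URL = "rtsp://192.168.1.50:8554/result"  # hypothetical IP and path
INTERVAL_SECONDS = 10
MAX_IMAGES = 1000

saved = 0
while saved < MAX_IMAGES:
    # Reopen the stream each time to avoid reading a stale buffered frame.
    capture = cv2.VideoCapture(STREAM_URL)
    ok, frame = capture.read()
    capture.release()
    if ok:
        cv2.imwrite(f"feeder_{saved:04d}.jpg", frame)
        saved += 1
    time.sleep(INTERVAL_SECONDS)
```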

Just sampling the images, I can see the device has already captured a few birds, which tells me this approach might work. The next question is whether, after tagging, these images will be enough data to train a working model.

Image From Dataset

For now, I’m still collecting data. The next post will cover culling the junk data, identifying the species in the captured images, and tagging them accordingly. Stay tuned!
