Creating a health dashboard by hacking intelligence into an AOpen (the ‘A’ for ancient) monitor, with metrics aggregated by Graphite and beautifully displayed with Grafana.
I’d like to say that the build went off without a hitch, but I ran into two issues. Not project-critical issues, but troublesome nonetheless.
Let’s talk about them: how I managed to successfully resolve some dastardly slow performance, and how I failed to resolve a high-static audio issue.
Troubleshooting poor processing performance
Super excited to get the system going, I dd’d Raspbian Buster (desktop) onto a 16 GB card and completed setup for my Pi Zero.
Immediately after boot, I noticed how sluggish the system was. Some symptoms I’ve experienced include:
Desktop environment input lag (mouse movement, key presses)
Long time to load applications
SSH authentication takes abnormally long (reached 10 seconds at one point)
SCPing at a rate of 10 kB/s over Ethernet
CPU always at 100% utilization
Half of me thought it had something to do with my configuration. However, these symptoms had been present since the first boot into Raspbian. Furthermore, the majority of my configuration was network-related – nothing that would cripple other system components.
The other half of me thought it had to do with my Pi Zero. I did some reading online and found others also experiencing poor system performance with Raspbian Buster (desktop) on the Pi Zero. Perhaps a full desktop environment image of Raspbian was asking too much?
Well, I didn’t let that stop me, and took to exploring a wide breadth of lightweight OSes for the Pi, including PiCore, Raspup, and DietPi.
I’ve wiped this MicroSD card enough times to give it an identity crisis.
I eventually got tired of testing different distributions and defaulted to Raspbian Buster Lite, where I installed LXDE along with Midori, Epiphany, and Chromium to decide which would serve as my kiosk browser. The final image after setup proved to be the most responsive, but browsing a single page still took forever (>60 seconds to load alvinr.ca) – and even then, the load wasn’t complete.
The more numbers I put together, the stranger the situation looked. It was here that I put software aside and started looking at some hardware specifications.
The Pi Zero runs a 1GHz single-core CPU with 512MB RAM. My CPU utilization on a fresh installation of Raspbian Lite was always near 100%, while RAM usage on my own image never exceeded ~200 MB.
The entire unit remained quite cool while operating, barely deviating from room temperature – it wasn’t like I was giving it a reason to, nor the means to, with a 5V/1A supply.
Huh.
Taking a look at the power requirements for the Pi Zero made me realize I was running a supply below the recommended rating – and that’s taking into account my ‘desktop’ setup, with a USB/Ethernet hub housing a wireless keyboard/mouse dongle.
Could it be? Was the Pi throttling its CPU to prevent a brownout?
I’m telling you – if there’s anything I’ve learned as a FIRST alumnus from four years of competitive robotics, it’s to know your power draw, and always charge your damn batteries.
As for our system here, we know the power draw. Some further research yielded the official Raspberry Pi power documentation, which recommends a 2.5A supply.
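In hindsight, the firmware will even tell you when this is happening. A quick check, assuming a stock Raspbian image (the vcgencmd tool ships with it):
# Query the firmware’s throttle state; a non-zero value spells trouble
vcgencmd get_throttled
# Bit 0 = under-voltage right now, bit 16 = under-voltage has occurred;
# e.g. throttled=0x50005 means the Pi is actively throttling due to low voltage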
I’ve got something close enough – a Samsung S7 5V/2A charger. I swear, if I plug in this supply and it turns out that was the problem all along…
Well.
After an hour of playing around with it (and several sanity-driven reboots later), I now experience U N P A R A L L E L E D performance. Really though, performance has improved drastically.
CPU utilization with Chromium running sits at around 60%, and it now takes only 20 seconds to fully load alvinr.ca. Network performance improved as well, managing an SCP transfer at ~8 MB/s – close to the limit of the 10/100 Ethernet adapter.
All that’s left now is to replace the existing 5V/1A adapter with this one.
Sweet – let’s move on.
Troubleshooting high static on the Raspberry Pi
It’s the end of the final assembly – we’ve got a better power supply, wires and components have been neatly organized, and everything is secured down. Time to celebrate with a song.
I couldn’t even make it 10 seconds in – the sound was absolutely jarring, a rough mix of tones and high static.
Not the song, mind you – I’d definitely broken something along the way.
Let’s get this out of the way – I haven’t resolved this yet, nor have I pinned down an exact cause. I’m moving forward with the project, considering all the time invested in troubleshooting (and since audio was an optional feature).
However, if you manage to run into something similar, here are some things I’ve tried (with a quick software-side sanity check sketched after the list):
Re-seating connectors
Swapping between shielded and unshielded AUX cables
Grounding the Raspberry Pi
Trying a different audio output device
Trying a different audio track, a different audio player, even different Linux distros
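As for that software-side check – standard ALSA tooling on Raspbian (the mixer numid below is the usual Raspbian mapping; adjust if yours differs):
# Play a clean, generated tone – rules out the track and the player in one go
speaker-test -t sine -f 440 -c 2
# Explicitly force the audio route (0 = auto, 1 = analog, 2 = HDMI)
amixer cset numid=3 2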
At the least, I’ve narrowed it down to a hardware fault – either with the Pi or, more likely, the Mini HDMI to VGA adapter (since it splits audio out to the AUX output).
A little light and some digging through hot glue eventually led me to a possible fault, present in this very adapter.
While I don’t know what purpose this wire serves, I’ll leave the blame on it for now, until I find the time to either investigate further or purchase another adapter.
A journey in extending a ZFS volume pool on a FreeNAS virtual machine after changing vdisk size.
Skip to the procedure section to dig right into it
Background
Virtualizing I/O is perhaps one of the best ways to shoot yourself in the foot. Or IS it?
You’re taking a piece of software whose very namesake involves large I/O operations, and throwing it behind layers of abstraction. When an I/O request is made from a VM, the hypervisor must both ascertain the source and gain access to the destination by translating the abstracted storage location to a physical one – all while remaining secure.
Not to mention:
Your I/O-heavy VM is contending for bandwidth across an already busy storage driver, to storage which may also be shared by other VMs.
Furthermore, if you’re not even running a bare-metal hypervisor and are instead using applications such as VMware Workstation or Oracle VirtualBox, performance degrades further – since your hypervisor is now contending with the host operating system for I/O.
Why, then, would one want to virtualize a NAS?
As much of a sin as this seems to be, there are some clear and powerful benefits to doing so.
Perhaps most obviously, you’re given the flexibility of any virtual system: simplified management, accurate hardware performance metrics, and one-click customization should you wish to change CPU/RAM/disks.
This ability to change the size of a VM’s disk is quite trivial. Getting FreeNAS to recognize the added space is another story altogether.
Furthermore, there are ways to eliminate I/O bottlenecks by using PCI passthrough, giving the VM access to raw disks. Do note that certain hypervisor features become unavailable when doing so – take VMware for example, where you will lose fault tolerance, HA, DRS, snapshots, and a few more.
And then there’s the laziness factor: why go through the effort of setting up a whole new, dedicated machine just for NAS when I can spin up a FreeNAS instance in minutes?
My use case is personal, and while certain pros/cons hold true across lab and production environments, research is key in determining whether NAS virtualization is right for you.
Procedure – Extending FreeNAS Pool Size after vDisk size change
When you increase the size of a virtual disk, it’s up to the guest to resize/grow partitions to utilize the free space. That’s what the following procedure covers: how to properly resize a volume after changing one of its disks’ dimensions – and have FreeNAS update the pool size accordingly. Note that this procedure was written against FreeNAS 9.2.1; however, it remains relevant for newer FreeNAS versions.
Step 1. Log into webmin, detach the volume being resized
Log into your FreeNAS web administration portal. By detaching this volume, we can safely perform partition changes on the disk(s) within. UN-CHECK the destroy data and delete shares options.
Step 2. Shut down FreeNAS
This ensures that there are no lingering locks on this volume and/or its disks before we grow our drive(s).
Step 3. Grow drives
Grow vDisks using whatever means available – vboxmanage, vmkfstools, vmware-vdiskmanager, etc.
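For instance (disk names and sizes below are placeholders – adjust to your environment):
# VirtualBox – size is given in MB, so 204800 = 200 GB
VBoxManage modifyhd "freenas-data.vdi" --resize 204800
# ESXi – extend an existing vmdk to 200 GB
vmkfstools -X 200G /vmfs/volumes/datastore1/freenas/freenas-data.vmdk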
Step 4. Power on FreeNAS, and enter shell
Step 5. Enter the following commands in order:
zpool status
Retrieves the name(s) of the partitions in each pool. One of these partitions will correspond to the disk(s) expanded. NOTE down the gptid for YOUR pool – we will use this later.
glabel status
Resolves a partition name to drive name (Components column). NOTE down the Components name for the partition identified previously – we will use this later.
gpart resize -i 2 /dev/da0
Replace da0 with the drive name (Components name) identified previously. This command resizes partition 2 (the data partition) on the specified disk. On completion, a ‘resize successful’ message is given.
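One caveat: if gpart instead refuses and reports the partition table as CORRUPT – common after a grow, since the backup GPT header no longer sits at the disk’s new end – recover the table, then re-run the resize:
gpart recover da0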
Step 6. Log into webmin, and auto import the pool
On successful auto import, you will see your volume under Storage > Active Volumes
Step 7. Bring device back online and expand pool size
Return to your FreeNAS shell and enter this final command to both bring the device online and automatically update the pool size:
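zpool online -e poolname gptid/some-long-gptid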
Replace poolname with the pool name from zpool status. Replace some-long-gptid with the partition name (gptid) from glabel status.
The ‘-e’ flag is meant for when a smaller disk has been replaced by a larger one – which, to the guest, is effectively what we’ve done.
At this point, you should be able to refresh your webmin and note the changes in used/available/size for your particular volume under Storage > Active Volumes. This, of course, will also be reflected in any active shares.
Creating a health dashboard by hacking intelligence into an AOpen (the ‘A’ for ancient) monitor, with metrics aggregated by Graphite and beautifully displayed with Grafana.
As promised, let’s dive into some of the modifications made to Audio Control and USB/Ethernet Hub – all part of the master plan.
Audio Control – Moving Audio IN, INside
With our cheap little Mini HDMI to VGA adapter providing audio output from the Pi, all we need to do is feed it into the monitor. Spoon of choice? AUX cable.
Now here’s where it gets a little awkward. With the unit closed, Audio IN faces outwards – meaning we’d have to lead an AUX cable outside the unit and plug it into the back, making for an odd protrusion.
Well, that won’t do. One of the key tenets of this project was to keep the unit compact, with as many components as possible encapsulated within the monitor chassis itself.
Let’s perform surgery, extract the Audio IN (AUX) port, and see if we can move it around.
And done. Time to test. I powered on the unit, ensured the AUX cable was connected between our Mini HDMI > VGA adapter and Audio IN on the monitor, and basked in the graininess of Do You Remember, remixed by grey.
Yeah, it was pretty bad. Not because of the mod, I assure you (having tested audio on this monitor a while back) – more so because the speakers are terrible.
Anyways, I won’t be playing music through this anytime soon – any audio coming through the Pi will be for notifications/alarms.
USB/Ethernet Hub – Mounting
Never thought I’d say it, but for once, this modification was made EASIER by the cheap construction of a part.
The USB/Ethernet hub came apart easily (surprise), separating from its case and allowing me to determine a suitable mounting location.
Mounting it near the bottom kept it out of sight yet accessible, with the added bonus of being closer to the Pi. To mount the hub, I ran one of the leftover hinge screws through the lower half of the plastic hub case, right back into the frame.
All that was left was to cut out a portion of the rear cover to accommodate the area now occupied by the hub, and we were good to go.
And there we go! This wraps up a majority of the hardware. For now.
The next post will cover assembly in more detail, along with any issues we should consider before side-mounting onto my server rack.
Creating a health dashboard by hacking intelligence into an AOpen (the ‘A’ for ancient) monitor, with metrics aggregated by Graphite and beautifully displayed with Grafana.
When it comes to hardware, I’ll be keeping things simple. With the exception of the monitor itself, the other complex subsystems are as follows:
Raspberry Pi Zero
USB to USB/Ethernet hub
The remaining components are internal wires/cables, along with the Mini HDMI to VGA adapter and the 5V/1A wall adapter.
Internal Component/Wiring Diagram
Legend for the above diagram:
Bolded titles are components ADDED to the vanilla monitor system.
Italicized titles are components which had to be modified in some way.
Red lines are cables which had to be modified in some way.
Component boxes nested within other component boxes are either mounted on, or exist on, that component.
Let’s talk modifications
Certain cable/component modifications were pretty simple. For instance, the Mini HDMI Cable was folded at one end and held as such via some electrical tape. Other simple modifications included the Generic USB Cable used between the USB/Ethernet Hub and Pi Zero, and the Generic AUX Cable – both of which were looped and tied.
The rest were a little more complex; take the 5V/1A wall adapter, for instance. I didn’t bother hacking it open, because the case played a valuable role as its mounting point. Therefore, to supply AC, I split the input from the Power Distribution Unit to both the monitor itself and the 5V/1A wall adapter, connecting directly to the non-polarized plugs and wrapping the connections in heat shrink.
The USB to USB Micro-B cable was drastically shortened to 30 cm, rejoined via small solder joints, and heat-shrunk.
The HDMI to VGA adapter was stripped of its hideously large housing and plugged into the monitor’s VGA cable.
At this point, wiring video would be complete on a traditional monitor. However, this monitor has its VGA cable BUILT-IN. Man, was this an annoying realization. Fortunately, the other end wasn’t soldered onto the main board – it instead terminated in a rather delicate, proprietary AOpen connector.
To preserve the integrity of this connector, I ended up splitting and shortening the built-in VGA cable to 30 cm as well.
If you’ve ever wondered how all those pins in a VGA interface are sent along a single cable, I highly suggest you split a cable open to find out. It’s an absolute beast of a cable – this particular one had a woven metal sheath surrounding 3 coaxial-like cables and 9 wires.
Soldering individual strands was challenging enough with the variety of stranded conductors. Keeping stripped conductors from contacting each other was a whole new can of worms. Hot glue came in useful for once, helping not only hold the splice together, but also keep each individual wire in place, preventing contact.
Cables connected, fingers crossed, let’s power it on
IT’S ALIVE!
Thank the stars that this picture excludes the mess behind the monitor 😉 This covers all the core system components. We’ll be wrapping up the hardware soon – man, I can’t wait to put the rear cover back on.
We still need to talk about the USB/Ethernet hub and the Audio Control (both of which required some modifications), but we’ll leave that for the 4th GraPi post.
Creating a health dashboard by hacking intelligence into an AOpen (the ‘A’ for ancient) monitor, with metrics aggregated by Graphite and beautifully displayed with Grafana.
Let’s get some simple objectives and materials down before I change my mind. That way, I’ll have a post to update when I do.
Requirements
All hardware must be contained within the AOpen monitor.
Monitor should remain mountable.
Minimal power consumption from new hardware.
Minimal heat dissipation from new hardware.
Hardware (compute) should be sufficient to load a live Grafana dashboard.
Audio playback through integrated speakers for alarms (optional).
Creating a health dashboard by hacking intelligence into an AOpen (the ‘A’ for ancient) monitor, with metrics aggregated by Graphite and beautifully displayed with Grafana.
Background
Killing Floor 2 can be pretty fun, especially with a great crew. Not to mention the amazing community, which continues to pump out workshop content such as custom maps. With a crew assembled from players around the world, we casually run games now and then, a solid amount on custom maps.
Well, one of the servers I manage went down for some time. It wasn’t until further investigation that I discovered the host had run out of disk space from all the custom maps we’d loaded.
It was a minor inconvenience that resulted in some downtime. However, it would’ve been easily foreseeable had I been monitoring the host.
This made me realize something: a pretty solid chunk of the services I host for both external and internal use are manually monitored.
And by that, I mean I remote into them whenever the fit hits the shan.
Nowadays, tools which help with application monitoring and insight are bountiful. It would’ve been easy enough to just pick one and deploy it.
But that would be boring.
So, in combination with the need for some application monitoring & insights, let’s build our own monitor (literally), to be mounted on my server rack with the sole purpose of displaying a comprehensive health dashboard.
So, where we headin’?
Over the course of the next few weeks, I’ll be covering specific goals, documenting the progress already made, ranting about my failures, and generally discussing what I’ve learned – and will learn – along the way.
To make following along simple, each post dedicated to this project will not only be categorized grapi, but also have its title numbered in order of chronology and relevance.
A long time ago, I uploaded a picture of my Purple Leaf Plum taken during the summer.
Now, after numerous updates and server changes, I tried to access the image, only to be thrown a generic PHP error 500. Further investigation showed that the same happened with quite a few older files as well, including images from some of my old pages.
Et tu, high-school portfolio?
While I couldn’t care less about that breadboard schematic from 11th grade, that purple leaf plum picture was pretty nice.
Therefore, let’s embark on a journey to recover access to my purple leaf plum from a Windows Server machine running WordPress on IIS.
(or for the impatient, a TL;DR: PHP file uploads weren’t inheriting correct permissions)
Symptoms
Browsing directly to files uploaded in the past would result in an error 500.
Missing images/thumbnails/files.
Unable to edit (e.g. crop images) through WordPress Media Library.
Troubleshooting Journey
The 500 Internal Server Error response code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.
Knowing what a 500 means is one thing; pinpointing the cause is a whole different ballpark, considering 500s can be pretty vague.
In the past, I modified the permissions for alvinr.ca such that I wouldn’t be able to update through wp-admin – only manually. While this gave some odd behavior in wp-admin (asking for FTP credentials when hitting ‘Update Now’), it was expected. Could this error 500 issue be related?
Well, let’s take a look at the permissions for one of our uploads.
Huh. Well, we’re missing some entries – namely Users, IUSR, and IIS_IUSRS. Unsure how that happened, but let’s add the permissions and try browsing to it again.
Aha! It works! The Purple Leaf plum is now visible in all its glory.
Now that we’ve resolved that, let’s continue on with our lives. Making another post, uploading another image… and guess what.
It happens again.
Same error 500, and the uploaded file is missing those permissions!
What’s going on here? Who/what keeps nuking necessary permissions to my WordPress uploads?
To better understand this, let’s talk about what exactly happens when you upload a file to WordPress.
Problem
The process of uploading a file to WordPress is handled by PHP. During this process, PHP retrieves the file from the client and places it in a temporary location, before moving it to wp-content/uploads as dictated by WordPress.
When a file is written to this temporary location by PHP, it inherits permissions associated with that location.
When the file is moved to a WordPress specified location, it’s just that – moved. It STILL retains the permissions from the PHP temporary location.
And since PHP is not run by a user that owns the file, its permissions cannot be changed after the move.
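To illustrate the mechanism (a simplified sketch only – the field name and paths are made up for illustration, and this is not actual WordPress source):
<?php
// PHP has already written the upload into upload_tmp_dir,
// where the file inherited that directory's permissions (ACLs).
$tmp  = $_FILES['attachment']['tmp_name']; // e.g. C:\Windows\temp\phpA1B2.tmp
$dest = 'C:\inetpub\wwwroot\wp-content\uploads\2019\05\plum.jpg';

// A move, not a copy: on the same volume, the file KEEPS the ACLs
// it picked up in the temp directory instead of inheriting the
// uploads folder's permissions.
move_uploaded_file($tmp, $dest);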
This explains why new uploads are affected. It also explains the possible origins of this problem:
Updates to PHP
Modifications to PHP or your IIS site’s handler mappings
Modifications to the temporary file upload location written to by PHP
Solution
Define a new temporary file upload location for PHP
Add the users specified above to the temporary file upload location used by PHP (not recommended)
Since the default location for PHP file uploads is C:\Windows\temp, which is used by other applications including the system itself, I’d like to keep its permissions pristine before changes cause conflicts or, even worse, open security holes.
Thus, let’s specify a NEW temporary file upload location by modifying php.ini (the PHP configuration file).
Create a new location to be used for temporary file uploads by PHP. Example: C:\inetpub\php-temp
Launch IIS Manager, and identify the PHP installation used by your site by inspecting Handler Mappings
Navigate to the location of php.ini for the PHP installation identified previously
Open php.ini, find the upload_tmp_dir configuration item, un-comment it by removing the leading ‘;‘, and specify the new location (see the example below)
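Using the example location from earlier, the relevant php.ini change would look something like this:
; Before – commented out, so PHP falls back to the system temp directory
;upload_tmp_dir =
; After – uploads now land here (and inherit this directory's permissions)
upload_tmp_dir = "C:\inetpub\php-temp"
And since new uploads will inherit this directory’s ACLs, make sure it actually grants what we found missing earlier. One way to do so (a suggestion – review against your own security requirements):
icacls "C:\inetpub\php-temp" /grant "IIS_IUSRS:(OI)(CI)M" "IUSR:(OI)(CI)RX" "Users:(OI)(CI)RX"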
How to find your PHP configuration:
1. Select site in IIS Manager > Handler Mappings
2. View Ordered List to determine PHP used by site
3. Sort by path, identify highest priority name for *.php path
4. View Unordered List, and double-click item identified previously
Lessons learned: error 500s are sometimes more predictable when your web application is WordPress. Also, clean up your handler mappings. May your posts be fresh, and your images always available.
We’re taking the conventional ‘map’ out of navigation with our highly interactive, virtual environment in a pocket – the Campus Navigator.
ATLABS, 2019
During my final year at Sheridan College, I led team Atlabs on a wild journey in ideating, developing, and presenting the Campus Navigator at the Sheridan College 2019 Capstone Showcase.
The event was outstanding – KUDOS to everyone who partook and shared their projects across both the Software Development & Network Engineering (SDNE) and Mobile Dev. program streams.
As part of the event, a variety of awards were given out, as voted on by both participants and judges comprising alumni, faculty, and industry partners.
Knowing the implications of a boring project chosen with very little interest, our first (and perhaps longest) team exercise was ideation. The first month was spent brainstorming – learning more about each other and our interests – so that we might find a project that either aligned with the collective interest, or possessed at least one element that interested each member.
Our earliest brainstorming sessions were perhaps the least technical. They were more or less friend-meets: talking about interests, directions we wanted to take the project, and the severe lack of healthy fast food locally.
Eventually, we managed to narrow down some common archetypes. From here, we directed our discussions more towards the overall project. What kind of problems exist in the world today relating to X? Could we see ourselves doing Y for a year? Will we get arrested for Z?
From here, we worked out three solid project ideas.
All that was left was to investigate the technology we would need, the expertise of our team, the support available, and project breakdown should we choose that idea to run with.
For Atlabs, we were all interested in doing something we’d see ourselves (past, present, or future) using. Hence, we went down the mobile indoor mapping route, finding enough features within it to satisfy everyone. Then we discussed scope – quickly realizing that The Path wouldn’t be feasible (the first rule of Toronto is: you don’t commute to Toronto). So, we chose our campus: Sheridan College, Davis Campus.
Finally, we talked technology:
Mobile as per current app trends – specifically Android based on team expertise
In its earliest stages, Google Indoor Maps was terribly limited – literally black-and-white floor plans. WRLD3D directly supported route finding and a detailed POI handling system, not to mention full 3D interactivity support that looks downright awesome.
This was how the Campus Navigator was born.
TEAM
People over ideas. Ideas come and go, a majority never bearing fruit, remaining as lost promises. People are forever; they’re living, breathing, and exist right here, right now. Give them a reason to, and they will stand with you through thick and thin long after your idea grows wings, or turns to dust.
Our success in capstone came from excellent teamwork, and I couldn’t have asked for better.
By volunteering as guinea pigs for this wild ride, you’ve helped me develop a better understanding and appreciation for proper resource assignment and time management. At the same time, I hope I managed to impart some wisdom that you’ll carry into your future endeavors.
The leadership, project management, and variety of development skills learned throughout this project will not be forgotten – and man, am I excited for the next opportunity to try them out.
DEVELOPMENT
Sometimes in a project, you find yourself wondering where the time’s gone. Then you realize you’ve been tracing lines in QGIS for the past hour.
When it came to development, my time was spent on two main facets of this project: the Discussion feature and Map Design.
For Discussion, we envisioned from the start a thread-like board not too dissimilar from Reddit, where a user can nest replies and influence the score of replies via voting. Each point of interest (POI) on our map gets its own ‘post’, where users can hold discussions by making top-level comments, then nesting replies within each.
Users that are not logged in would see the same discussion, but would be unable to comment/rank replies until authenticated.
Realistically, authentication should be tied to students, and would hence use an institution’s SSO.
As a proof of concept, we utilized Firebase, which manages both discussion data storage and user authentication via Google OAuth.
A solid half of the effort I invested in this project went towards Map Design – despite my being initially unfamiliar with the concept of geographic information systems (GIS).
Seriously. I’m a Software Developer. My city shaping skills were limited to Sid Meier’s Civilization V. Until this project, of course.
Perhaps our greatest adversary was time; our team fought to grasp the essentials of geographical design so that we could build indoor maps in QGIS.
Of this time, roughly 40% was spent tracing the provided floor plans. QGIS’s built-in Georeferencer helped tremendously, but could only help so much, since some building outlines were misshapen (as provided by WRLD3D) or just plain didn’t exist (in the case of Sheridan College Davis Campus’ A-Wing), resulting in warped output from the Georeferencer that couldn’t be relied upon.
The rest of the time was primarily spent drawing straight lines, identifying key features (doors, walls, rooms), and attributing them.
These design tasks were among my largest takeaways, having needed to use several different coordinate reference systems (CRS) and handle complex designs (honestly, this makes those ‘floor plans’ seen on HGTV look like chicken scratch). Perhaps most importantly, they improved my spatial awareness and general geographical design mindset through several nights of lost sleep over “is this also considered hallway?” and “would I realistically take this path from X to Y?”
After publishing, any remaining time was used to correct misshapen areas and resolve compilation errors with WRLD3D.
PRESENTATION
Practice makes perfect. Unless you’re presenting. In that case, practice makes confidence, which makes perfect.
When it came down to presentations, Sheridan’s CST courses ensured we did. Lots.
Every group discussion session in CST2 was met with a “throw me an elevator pitch”. It only got more intense with formal presentations; first to our capstone session, then to a panel of venture capitalists, eventually leading towards the grand finale at the Sheridan College Capstone Showcase held at Trafalgar.
Capstone helped me realize something important in life; that presenting doesn’t have to be a chore. No, it’s not a ‘necessary evil’, and no, it doesn’t have to be a period of extreme anxiety and dread. Like talking with friends, a phone call, or even reading this post – it’s just another way we communicate.
Every day, we’re inherently presenting ourselves to the world. What changes, then, when you slap the label ‘presentation’ onto something? Usually, something is being put at stake – from a good grade to a career opportunity. However, we’re always putting something at stake when presenting – and that doesn’t necessarily have to be something to fear.
We’re inherently social creatures, slowly losing our ability to communicate confidently face-to-face to black text on a white screen. If there’s anything to fear, it’s this.
Thus: thank you, Capstone, for mock interviews, group discussions, elevator pitches, release candidates, VC pitches, and showcases. Like a good workout, the pain was there, but I’m leaving Sheridan stronger, and more presentable, for it.
SHERIDAN
The journey was perilous. I supported my team to the best of my ability. It hurt to watch others fall without support of their own. Great ideas, and even greater students. Gone by December.
Now, I didn’t really see fit to include this category in this little reflection of mine, but it’s necessary – especially if you’re reading this in preparation for your own capstone.
Capstone teams need more support from Sheridan College.
And by ‘Sheridan College’, I’m referring to everyone and everything outside of the capstone faculty – who were downright awesome, and perhaps some of the most inspirational professors I’ve had the pleasure of working with (you know who you are, Simon, Geoff, John).
While Atlabs was successful, it was only through the hard work, planning, leadership, and dedication of each of its members. We all gave up something along the journey, some more than others. In some cases, literal blood, sweat, and tears went into it.
On the other hand… some teams disbanded, their members dropping out of capstone altogether. Others had to completely drop their idea for another with a MONTH remaining in 2019. There were even those whose industry partners abandoned them mid-capstone.
While there may be extenuating circumstances for these, I personally blame Sheridan College for being at the core of them. However, that’s all in the past now. And while it’s left some lasting impressions on students, let’s talk about the future, and what can be done to make better impressions on future generations of students entering a Computer Technology related capstone.
The two pillars of support I’d like to see moving forward would be:
Monetary – Support us financially. The majority of our spending went towards our meets and presentations in the form of printouts, handouts, brochures, hardware rentals, etc. I’ve had the pleasure of talking with some very creative individuals with outstanding ideas who were unable to carry through with them due to lack of funding – either because they ran out of Azure credits, or because they required equipment/technology that isn’t available/affordable to the average student.
Community – Capstone shouldn’t be an ‘Applied Computing’ event. It’s a SHERIDAN event. I’m serious. Students put an entire year’s worth of effort, alongside their full-time studies, co-op, part-time jobs, and LIFE, into this. This is a momentous occasion, and I was seriously disappointed by the lack of impression from Sheridan. All of Sheridan College’s faculty and student body should be aware of capstone, what it is, and why we do it. Essentially, capstone should be prestigious. That way, when a team such as ours inquires about the floor plans necessary for our solution, it doesn’t take 4 MONTHS for facilities to deliver (there’s really no excuse; snippets are used in a variety of places on/off campus). OR, when we present at the capstone showcase, students actually have an idea of what’s going on, engage with teams, and understand the full weight of what we’ve accomplished (no, it’s not a homework assignment).
deep breath. Alright.
FUTURE OF CAPSTONE
Any practical industry project involves more than just ‘software’. There’s a variety of people involved from different professions, such as engineers to work out the hardware, and business analysts to help position the product in the market.
To make this a much more fulfilling, creative, and outstanding project, let’s see capstones bridged across programs, integrating students from various career paths.
Take the following, for instance: there’s amazing potential in the IoT sector for mechanizing central air vents and controlling them with your thermostat. That way, you can effectively choose how warm/cold different parts of your home should be. Heck, let’s take it a step further and integrate simple proximity sensors. Now we’ve got an automated system that can intelligently heat/chill rooms with occupants.
While engineering students may be able to mechanize air vents, they’ll be lost when it comes to the software necessary to interface with and drive communication between these vents and some centralized air system. That’s where an SDNE student would come in.
Not only would this be more realistic, but it would be a downright amazing learning experience for everyone involved. As for the quality of capstones? You’d be shaping the next generation of innovators.
Bringing the world together to protect life in our waters, make fisheries and aquaculture more sustainable and equitable, and preserve our planet’s future.
The only thing better than eating fish is hacking them.
In the proverbial sense of course.
During the second week of February, I led team ‘Finna hit a Fin’ at HackerNest’s annual Fishackathon, held at Toronto City Hall. Through a rigorous 28-hour non-stop development cycle, we managed to create and present a functional prototype of our solution: Infinifish.
Infinifish is a hand-held device that simplifies fish identification, meant to capture data from the fin, compare it against a known data source, and provide an accurate identification of the species. For our prototype, we harnessed spectroscopy (color sensing), as well as translucency data, to identify a fish fin – and therefore, the fish.
Similar to a human fingerprint, fish fins are perhaps the most unique aspect of a fish. Our team wanted to take this idea further and create a simple, cheap ‘fingerprint’ scanner for fish.
Presenting amongst the app-heavy crowd at Fishackathon Toronto, we discussed the modern relevance of spectroscopy in light of all the AI and photo recognition technology. Quite simply, image/pattern recognition has its difficulties. These difficulties are being mitigated by applying AI to tech such as facial recognition, but until then, color pattern mapping via spectroscopy remains the affordable and simple solution.
Infinifish was a hit among the crowd with a demonstration involving live fin data, finished the day in 2nd place, and was eventually presented to the Faculty of Applied Science and Technology (FAST) at Sheridan College.
Our solution is targeted primarily at research and marine biology, in hopes of automating currently manual identification methodologies, as well as at recreational use.
Infinifish is currently in development with effort being placed into enhancing accuracy via pattern matching, and will (hopefully!) be presented at the next Fishackathon in 2019.