1.25″ Telescope Eyepiece & Adapter Caps

Nothing much to say here; I didn't feel justified spending $10+ on some plastic caps just to keep my telescope from collecting dust and my eyepieces from scratching.

1.25in-adapter-cap

STL (for 3D printing)

IPT (Autodesk Inventor part file)

1.25in-eyepiece-cap

STL (for 3D printing)

IPT (Autodesk Inventor part file)

Zoom, enhance, zoom some more.

Not long ago I had the privilege of meeting Speo, creator of MagicDSC and overall astronomy hobbyist/fanatic, in person. Thanks again for the crash course on sky watching – I look forward to fiddling with your project more while embracing the hidden beauty of the night sky.

Suffice it to say, I'm now the owner of a 6″ Sky-Watcher Dobsonian.


I've been fascinated with the night sky for the longest time, which makes sense considering how much of it I spend awake thanks to my abysmal sleep schedule.

I figured it was about time I picked up a proper tool to observe its many mysteries, and here we go.

Moving forward, I’ll be documenting my journey with this cool piece of tech under #astronomy – including the following on how I improved a cheap red dot viewfinder with an old ball point pen.


Tweaking a Red Dot Viewfinder

While super useful and affordable, the finder had quite a bit of wobble in both the X and Y axes.

You can easily fix this using a few small spacers, but I didn't want to have to disassemble it again and remove a spacer whenever I needed to re-align it.

For that reason, I ended up stealing the spring from a non-functional ballpoint pen and cutting it into small segments, which I put between the X and Y knobs and the viewfinder frame. After that, the wobble was practically non-existent.

You can also re-position the spring between the head of the screw and the viewfinder body if you find yourself calibrating -X/-Y more than +X/+Y.

I can now release tension on each axis without having to adjust how many spacers I use to align it.

My Kubernetes Quickstart Workshop Experience

Last week I held a Kubernetes Quickstart Workshop as part of the TOHacks 2022 Hype Week.


Together, we went over some basic Kubernetes concepts, API resources, and the problems they solve relative to non-cloud-native architectures.

As part of this workshop I created some simple K8s cheat-sheets/material, along with a whole new Kubernetes cluster which I exposed as part of a hands-on lab.

Participants, who included high school and post-secondary students, found the workshop pretty cool. I've heard post-workshop feedback that their friends thought they were hacking as they used kubectl.

Kids, kubectl responsibly.

I also noted feedback that more detail on certain topics would've been great, which I would've loved to provide; I barely managed to get through everything I wanted in 60 minutes 🙁
The primary goal was to explore common questions/problems someone hosting an application may encounter, and how Kubernetes can address them.

All in all, it was a great experience that I look forward to doing more often – not particular to Kubernetes, but cloud native architecture in general. As a cloud engineer, I've learned the hard way that this is anything but straightforward, and I hope others can learn from my struggles as they embark on their journeys in this space.

Workshop aside, I wanted to chat a little about the lab environment I created since I got questions post-lab from other Kubernetes aficionados.

I’ve been running Tanzu Community Edition on my home lab since its early days, both as a passion project having contributed to it, and because it integrates well with my home vSphere lab. As such, I already had a management cluster serving as my ‘cluster operator’, which I use to create/destroy clusters on the fly. This was no different, with the exception of creating a user fit for this public lab.

How I created a Kubernetes Lab in 15 minutes using Tanzu Community Edition

I should probably preface this by saying that this is NOT something I would encourage for any kind of persistent environment.

While this might be less dangerous behind a VPN, the cluster we're about to create risks leaking important vSphere configuration data, along with the credentials of the user that cluster resources will be created under. For example, if a user can fetch vsphere-config-secret from the kube-system namespace, consider your vSphere environment compromised.
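To make that risk concrete, this is all it would take for a user with read access to kube-system (the kubeconfig path here is a placeholder for whatever credential you hand out):

```shell
# Illustrative only: with read access to kube-system, the vSphere
# credentials are one command away.
kubectl --kubeconfig ./lab-kubeconfig get secret vsphere-config-secret \
  -n kube-system -o yaml
```

This is exactly why the Role created later in this post is scoped to the default namespace only.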

This process assumes the following:

  1. You have an existing management cluster created
  2. Your Tanzu cluster config VSPHERE_CONTROL_PLANE_ENDPOINT resolves to some internal IP that’ll be used for your workload cluster API, and resolves to your public IP externally
  3. You can port-forward your cluster API (there are probably better ways; more on this later)

Deployment

Step 1. Run the following command with your cluster config on your Tanzu bootstrap host.
This will also be the longest step, depending on your hardware.

tanzu cluster create open --file ./open.k8s.alvinr.ca -v 9
AVI_CA_DATA_B64: ""
AVI_CLOUD_NAME: ""
AVI_CONTROL_PLANE_HA_PROVIDER: ""
AVI_CONTROLLER: ""
AVI_DATA_NETWORK: ""
AVI_DATA_NETWORK_CIDR: ""
AVI_ENABLE: "false"
AVI_LABELS: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: ""
AVI_PASSWORD: ""
AVI_SERVICE_ENGINE_GROUP: ""
AVI_USERNAME: ""
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: open
CLUSTER_PLAN: dev
CONTROL_PLANE_MACHINE_COUNT: "3"
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: k8s.alvinr.ca
VSPHERE_CONTROL_PLANE_MEM_MIB: "16384"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "4"
VSPHERE_DATACENTER: <>
VSPHERE_DATASTORE: <>
VSPHERE_FOLDER: <>
VSPHERE_NETWORK: <>
VSPHERE_PASSWORD: <>
VSPHERE_RESOURCE_POOL: <>
VSPHERE_SERVER: <>
VSPHERE_SSH_AUTHORIZED_KEY: |
    ssh-rsa <>
VSPHERE_TLS_THUMBPRINT: <>
VSPHERE_USERNAME: <>
VSPHERE_WORKER_DISK_GIB: "40"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "4"
WORKER_MACHINE_COUNT: "3"

* items denoted with <> have been redacted
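While Step 1 churns, you can watch progress from the management cluster context. This is just a convenience, and assumes the workload cluster lands in the default namespace (the Tanzu default):

```shell
# Run against the management cluster's kubeconfig context
tanzu cluster list                        # overall workload cluster status
kubectl get clusters,machines -n default  # underlying Cluster API resources
```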

Step 2. Retrieve the admin credential; you might want it later (plus, we'll steal the certificate-authority data from it in Step 7).

tanzu cluster kubeconfig get open --admin --export-file admin-config-open

Step 3. Generate a key and certificate signing request (CSR).
Here, we'll prepare to create a native K8s credential.
Pay attention to the org (O) element, k8s.alvinr.ca; this will be the subject we role-bind later.

openssl genrsa -out tohacks.key 2048
openssl req -new -key tohacks.key -out tohacks.csr -subj "/CN=tohacks/O=tohacks/O=k8s.alvinr.ca"

Step 4. Open a shell session to one of your control plane nodes.
Feel free to do so however you want; my bootstrap host had kubectl node-shell from another project, so I ended up using that.
Copy both the key and CSR onto this node.

Step 5. Sign the CSR.
My lab's lifetime was the duration of TOHacks 2022 Hype Week, so 7 days of validity was enough.

openssl x509 -req -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 7 -in tohacks.csr -out tohacks.crt
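If you'd like to dry-run the signing before touching the control plane, the whole dance works locally with a throwaway CA (purely illustrative, standing in for the cluster's /etc/kubernetes/pki/ca.crt and ca.key):

```shell
# Throwaway CA standing in for the cluster's CA (dry run only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 7 -subj "/CN=dry-run-ca"
# Same key/CSR as Step 3
openssl genrsa -out tohacks.key 2048
openssl req -new -key tohacks.key -out tohacks.csr \
  -subj "/CN=tohacks/O=tohacks/O=k8s.alvinr.ca"
# Same signing shape as Step 5
openssl x509 -req -CA ca.crt -CAkey ca.key -CAcreateserial -days 7 \
  -in tohacks.csr -out tohacks.crt
# Inspect what the API server will see: CN becomes the username,
# and each O becomes a group
openssl x509 -in tohacks.crt -noout -subject -issuer -enddate
```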

Step 6. Encode the key and cert in base64; we'll use these later to create our kubeconfig.
Copy these two back to your host.

cat tohacks.key | base64 | tr -d '\n' > tohacks.b64.key
cat tohacks.crt | base64 | tr -d '\n' > tohacks.b64.crt

Step 7. Build the kubeconfig manifest.
Here's an example built for this lab; this is what I distributed to my participants.
We didn't bother going over importing this credential into their kubeconfig path; we just provided it manually to every command via the --kubeconfig flag.

---
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <ADMIN_KUBECONFIG_STEP_2>
    server: https://k8s.alvinr.ca:6443
  name: public
contexts:
- context:
    cluster: public
    namespace: default
    user: tohacks
  name: tohacks
current-context: tohacks
kind: Config
preferences: {}
users:
- name: tohacks
  user:
    client-certificate-data: <tohacks.b64.crt_STEP_6>
    client-key-data: <tohacks.b64.key_STEP_6>

Step 8. Create a Role.
Here is where we define what our lab participants can do. In this case, I created a namespace-scoped Role (in default) granting them pseudo read-only access (they can only use the get/watch/list verbs).

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: tohacks
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]

Step 9. Create a RoleBinding.
Here is where we define our subject: in this case, the org embedded in the cert signed from the CSR generated in Step 3.

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tohacks
  namespace: default
subjects:
- kind: Group
  name: k8s.alvinr.ca
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tohacks
  apiGroup: rbac.authorization.k8s.io

Step 10. Deploy roles.

kubectl apply -f tohacks-role.yaml
kubectl apply -f tohacks-rolebinding.yaml
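With both applied, you can sanity-check the effective permissions by impersonating the group (the impersonated username is arbitrary; only the group matters here). Run these with your admin kubeconfig, since impersonation itself needs elevated rights:

```shell
# Expect "yes": reads in default are allowed
kubectl auth can-i list pods --as anyone --as-group k8s.alvinr.ca -n default
# Expect "no": writes, and anything outside default, are not
kubectl auth can-i delete pods --as anyone --as-group k8s.alvinr.ca -n default
kubectl auth can-i get secrets --as anyone --as-group k8s.alvinr.ca -n kube-system
```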

Step 11. Expose K8s API (port 6443).
Finally, the part I disliked the most: exposing my cluster API at the host IP defined by VSPHERE_CONTROL_PLANE_ENDPOINT in Step 1.

I'm sure there are lots of better ways to do this; another that occurred to me was sitting it behind a reverse proxy like nginx, since K8s API traffic is still standard TLS/HTTPS.
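For the reverse-proxy route, note that the API authenticates clients by certificate, so nginx would need to pass the TCP stream through untouched rather than terminate HTTPS itself. A minimal sketch, assuming the internal control-plane IP is 192.168.1.50 (a placeholder):

```nginx
# nginx.conf fragment: raw TCP pass-through of the Kubernetes API.
# Requires the stream module; TLS still terminates at the cluster
# itself, so client-certificate auth keeps working.
stream {
    server {
        listen 6443;
        proxy_pass 192.168.1.50:6443;  # internal VSPHERE_CONTROL_PLANE_ENDPOINT
    }
}
```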

Tear-down

When TOHacks 2022 concluded, the lab was swiftly sent to the void it came from.

tanzu cluster delete open

TOHacks 2022 — Summer’s around the corner, time to get your hacks in order.

You like code, we like code. Let’s write some sweet, sweet code — together.

TOHacks 2022 — Toronto’s foremost hackathon will be closing applications on May 23rd.

Now is the time to register for a chance to win some cool Apple swag, an opportunity to build a wicked awesome solution and a guarantee to learn something new.

Leading up to the hackathon (May 24th — 27th), we’ve got several workshops and activities dabbling in new tech and topics dubbed Hype Week. Who said that developers were boring sticks in the mud?

All of this builds up towards the grand finale — the TOHacks 2022 hackathon happening on the last weekend of May (28th-29th). Dive into 24 hours of coding mayhem and even more workshops as you develop an award-winning project.

Cross-posted from Medium:
https://medium.com/@TOHacks/tohacks-2022-summers-around-the-corner-time-to-get-your-hacks-in-order-93199b7bf7c0

Let’s talk mods. Microsoft’s Xbox Game Studios acquires Activision Blizzard

The $68.7 billion acquisition of Activision Blizzard now makes Microsoft the third-largest gaming company, behind Tencent and Sony, cementing once and for all the significance of Microsoft Gaming. With huge franchise titles such as Call of Duty, Overwatch, and World of Warcraft now under its belt, we can expect the next few years to be very exciting. Looks like we'll see more utility from the Xbox app beyond being a launcher for Halo and Fortnite.

This isn't the first of its kind from Microsoft Gaming, nor will it be the last. Among the most notable: Mojang, the studio behind sandbox survival classic Minecraft, back in 2014, and ZeniMax in 2021, which included Bethesda Game Studios, creators of The Elder Scrolls and Fallout franchises. Microsoft is taking huge steps toward conquering every medium of gaming, from mobile to PC and console, instilling hope for new and innovative releases that'll keep us busy in the lockdowns to come (just kidding, I hope…).

Everyone knows that acquisitions can be scary, and the truth is they always are. Not because terrible things necessarily happen (though they very well can, as in the case of Lionhead Studios), but because of the uncertainty and unknowns that developers and fans must now face.

Despite this, time and time again, gaming communities have proven they persist long after the lives of their creators. Take FreeSpace 2, for example: released in 1999 and still alive and kicking courtesy of passionate modders and sci-fi enthusiasts at Hard Light Productions. 23 years later, you can still find its beloved mod manager Knossos and many other mods/add-ons in active development.

Modding communities breathe a special kind of life into games, which is oftentimes why competent studios respect and cherish them. It's no easy feat to find time to decompile code (nor is it always legal) and build extensions to existing game functions that fans have long desired. There are even extreme cases like DayZ, born as an Arma II mod, where a mod transcends its source and becomes a game of its own.

In spite of this, creators don't see these as threats to their existing development efforts. Rather, they're a sign of success, for every community made and mod developed brings that title one step closer to becoming an idea, an experience, that will stand the test of time.

But can we expect the same open-mindedness from Microsoft and these now-Xbox Game Studios titles?

Cross-posted from Medium:
https://medium.com/@TOHacks/lets-talk-mods-microsoft-s-xbox-game-studios-acquires-activision-blizzard-ec5716c1a661

Registering a VM created by a slightly newer version of ESXi

Related to the previous post, I downgraded one of my ESXi hosts to ESXi 7U1c in a fit of frustration and sleep deprivation.

Now comes the headache of re-configuring this host, along with registering VMs that were created by a newer version of ESXi 7. Specifically, 7U2.

Et tu, VMs?

Now I didn’t really use any special 7U2-specific configuration for my VMs, so let’s go ahead and cheat our way into mutating their VMX configs so we can register them.

  1. Remove invalid VM from inventory
  2. Enable SSH on ESXi Host
    Lots of guides online for this already – use your favourite method, via vCenter, via ESXi web UI, via DCUI, etc.
  3. Open a vi session to your VM config file
    Assuming you’ve got your VMs tucked away in some datastore located in /vmfs/volumes, go ahead and do a vi /vmfs/volumes/<DATASTORE_NAME>/<VM_NAME>/<VM_NAME>.vmx
  4. Edit the virtualHW.version
    In my particular case, I dropped it from 19 to 18.
    List of virtualHW versions: https://kb.vmware.com/s/article/1003746
  5. Register VM
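Step 4's edit can also be done non-interactively with sed, demonstrated below on a scratch file; on the host, point it at the real .vmx under /vmfs/volumes/ instead (BusyBox sed on ESXi does support -i):

```shell
# Scratch file standing in for <VM_NAME>.vmx
printf 'virtualHW.version = "19"\n' > demo.vmx
# Drop the hardware version from 19 to 18 in place
sed -i 's/virtualHW.version = "19"/virtualHW.version = "18"/' demo.vmx
cat demo.vmx
```

For step 5, registering from the same SSH session also works: vim-cmd solo/registervm /vmfs/volumes/<DATASTORE_NAME>/<VM_NAME>/<VM_NAME>.vmx.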

VMware ESXi 7U2 – Host losing access to SD card

Since the fall of 2021, when I upgraded from ESXi 7U1c to 7U2a, my host has lost access to its internal SD card several times. My setup is pretty standard: Dell's custom ESXi image installed on my R830's internal dual SD module (IDSDM).

After around 16-20 days of uptime, the host would lock up: the ESXi API would become unresponsive (busted vmware-hostd and/or vmware-vpxa services?), meaning any attempt to manage VMs via the ESXi web UI would time out and/or sit there indefinitely, as would any attempts via esxcli. Hosts managed by vCenter would time out on any dispatched vCenter jobs, with similar UI symptoms as above.
Some of these include:

  • Unable to manage power state of VMs (operation pending infinitely)
  • Unable to vMotion/migrate VMs
  • Web/remote console does not work

Some host functionality persists, such as managing host services and enabling SSH, but what you can do at this point is very limited.

Fortunately, running workloads (VMs, vApps, pools) seem to be unaffected.

What’s going on?

This is a known issue, per VMware KB #2149257, where high-frequency read operations against ESXi's SD card (either a single card or the IDSDM) cause 'SD card corruption'.

This is attributed to a new partition schema, in which ESXi's scratch partition, located on the same media, experiences high I/O. Hence, in the same KB, they suggest using a ramdisk as the VMware Tools repository.

There are other theories as well which hint at a bug in the vmkusb driver, an issue VMware engineers are still looking into.

It’s important to note that this has only been observed for those deploying ESXi to an SD card, IDSDM, or crappy flash drives. Those that have deployed ESXi to a disk (HDD/SSD) have not experienced this issue.

You can follow along with the VMware community discussion here.

What can I do?

You’ve got a few options, all of which have varying success and suck in production.

Ordered from safest to most disruptive:

  1. Restart vmware-hostd and vmware-vpxa services, which seem to re-establish connectivity to the ESXi filesystem on your SD card.
  2. Move scratch partition to disk or datastore, and use Ramdisk for vmware-tools (done automatically in ESXi 7U3).
  3. Downgrade to ESXi 7U1c (last known 7.0 version before these issues)
  4. Export an ESXi config bundle backup, re-install ESXi, and restore from ESXi config bundle.

Depending on whether you’re experiencing one of the two aforementioned issues, YMMV.

How I temporarily remediated

Until we see this resolved for good, here's the remediation plan I used, based on feedback from other awesome individuals in the VMware community; it should address both faults.

1. Stop vmware-hostd and vmware-vpxa

/etc/init.d/hostd stop
/etc/init.d/vpxa stop

2. Wait 60 seconds, attempt unload of vmkusb, and wait another 60 seconds

sleep 60
vmkload_mod -u vmkusb
sleep 60

3. Start vmware-hostd and vmware-vpxa

/etc/init.d/hostd start
/etc/init.d/vpxa start

After a moment you should find your host is once again responsive.

4. Enter maintenance mode

esxcli system maintenanceMode set --enable true

You can also check maintenanceMode state, useful after the last step.

esxcli system maintenanceMode get

5. Enable Ramdisk for the VMware Tools repo

The first command registers the ToolsRamdisk advanced option; the second actually enables it.

esxcfg-advcfg -A ToolsRamdisk --add-desc "Use VMware Tools repository from /tools ramdisk" --add-default "0" --add-type 'int' --add-min "0" --add-max "1"
esxcfg-advcfg -s 1 /UserVars/ToolsRamdisk

6. Reboot host

reboot

7. Exit maintenance mode

esxcli system maintenanceMode set --enable false

The TOHacks 2021 Digest

2021 was an eccentric one, riddled with changes, from the widespread adoption of remote work and study programs to virtual everything. Meetings, workshops, and hackathons are but a few of the things we have had to deliver 100% digitally.

In spite of all this, we were not deterred. Our TOHacks 2021 hackathon ended up being one of our most successful virtual hackathons to date with over 700 participants and 190+ projects. We then followed up with TOConnect 2021 which delivered dozens of workshop sessions to foster the leaders of tomorrow.

This would not have been possible without inspiring individuals dedicated to hacking a better future, our sponsors and their financial support, and the many TOHacks volunteers who worked tirelessly in the background.

Here is a peek at some of the lesser-known but equally grand projects our teams have been cooking up, along with their aspirations for the new year.

Cross-posted from Medium:
https://medium.com/@TOHacks/the-tohacks-2021-digest-7f00e00948bf

Collaborative hacking with Visual Studio Live Share

When hacking in a team, there is always some problem that devolves into peer programming. In a physical Hackathon, this is but a chair roll away. When virtual, your options become more limited: either one person shares their screen at 5 frames per second over Zoom, or git gets involved in ways it definitely should not be.

Introduced as an extension in 2018, Microsoft Visual Studio Live Share lets individuals share a collaborative, live integrated development environment (IDE), much like a Google Doc. It also facilitates sharing of terminal sessions and web servers, and has integrated voice calling. All of this is available at no extra charge, irrespective of Visual Studio license, and works across both Visual Studio and Visual Studio Code.

You will encounter many peer-programming scenarios in your Hackathon journey — especially when virtual. Let us explore some common ones and how Live Share can help your team work as effectively and efficiently as possible — after all, the clock’s ticking.

Cross-posted from Medium:
https://medium.com/@TOHacks/collaborative-hacking-with-visual-studio-live-share-b1a9dd743c8a