
Charles Jenkins
Graphic Design and Tech Portfolio
AI ML Motion Recognition for American Sign Language
While attending a baseball game with friends, I encountered a ticket seller who communicated only through sign language. As a sign language student, I recognized a need for accessible sign language learning tools. I developed an AI Machine Learning (ML) tool that uses body and facial skeletal motion tracking through a webcam to teach sign language and give feedback on the correctness of each sign. This experience inspired me to keep improving AI and helping people.
I believe AI ML can facilitate learning, especially for those unable to access traditional sign language instruction. My goal is to eventually expand the tool's reach by sharing it with teachers and their students across multiple schools, so the model can be trained on the diverse ways people perform each sign. A sign language recognition model improves as it is given more training videos, so the act of recording specific signs, along with the use of the recognition tool itself, can be woven into sign language curricula. The benefits include language acquisition, cultural exploration, and better communication with the deaf community.
When a sign is performed incorrectly, the AI ML tool suggests the closest matching sign along with its percentage confidence, promoting continuous learning and refinement. Creating this tool with AI ML was a source of pride, and it highlighted the potential of AI ML for a positive impact.
Through this project, I grew as a person, demonstrating dedication to learning and problem-solving. To resolve challenges, I sought help from AI ML experts who posted training videos and PyTorch code on YouTube (Nicholas Renotte), as well as from my father. I realized early on the growing interest in, and potential of, AI ML solutions for enhancing people's lives. This project affirmed my ability to innovate with research and dedication.
In the video above, the training engine displays a sign that the user must complete for the training exercise. Each sign must be performed repetitively. Over time we learned that mixing repetitions where the hands drop to the sides with repetitions where they stay in place made the model more accurate, as did having different people perform the signs.
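For readers curious how this kind of training data can be collected, below is a minimal Python sketch of the approach described above: a webcam feed is run through MediaPipe Holistic, the detected keypoints are flattened into a vector for each frame, and a fixed number of frames is saved for every repetition of every sign. The sign list, folder names, and sequence counts are illustrative assumptions, not the project's actual values.

# Minimal sketch: collecting keypoint sequences per sign with MediaPipe + OpenCV.
# Sign names, counts, and folders are illustrative, not the project's real values.
import os
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

SIGNS = ["hello", "thanks", "please"]   # example signs to train
SEQUENCES_PER_SIGN = 30                 # repetitions recorded per sign
FRAMES_PER_SEQUENCE = 30                # frames captured per repetition
DATA_DIR = "sign_data"                  # hypothetical output folder

def extract_keypoints(results):
    # Flatten pose, face, and both hand landmarks into one feature vector,
    # padding with zeros when a body part is not detected in the frame.
    pose = (np.array([[p.x, p.y, p.z, p.visibility] for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z] for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    for sign in SIGNS:
        for seq in range(SEQUENCES_PER_SIGN):
            frames = []
            for _ in range(FRAMES_PER_SEQUENCE):
                ok, frame = cap.read()
                if not ok:
                    continue
                # MediaPipe expects RGB input; OpenCV captures BGR.
                results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                frames.append(extract_keypoints(results))
            out_dir = os.path.join(DATA_DIR, sign)
            os.makedirs(out_dir, exist_ok=True)
            np.save(os.path.join(out_dir, f"{seq}.npy"), np.array(frames))
cap.release()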
In the video above, the Python PyTorch code is recognizing points on the face, hands, arms, shoulders, and fingers. The ML model tracks the motion of these points from frame to frame of the video, and the recognized signs are printed at the top of the screen with their percentage confidence. Accuracy improved dramatically the more we trained the model; however, repetitive training became tiresome, so it was imperative that my family members helped me train it as well.
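The paragraph above describes the recognition side: the motion of keypoints over a sequence of frames is classified into a sign and displayed with a percentage. Below is a minimal PyTorch sketch of one common way to do this, a small LSTM over keypoint sequences whose softmax output supplies the percentage. The layer sizes, label set, and feature dimension are assumptions for illustration, not the project's actual architecture.

# Minimal sketch: an LSTM classifier over keypoint sequences that prints the
# predicted sign with a softmax percentage. Sizes and labels are illustrative.
import torch
import torch.nn as nn

SIGNS = ["hello", "thanks", "please"]   # example label set
KEYPOINT_DIM = 1662                     # 33*4 pose + 468*3 face + 2*(21*3) hands

class SignLSTM(nn.Module):
    def __init__(self, input_dim=KEYPOINT_DIM, hidden_dim=128, num_classes=len(SIGNS)):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                 # x: (batch, frames, keypoints)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # classify from the final time step

model = SignLSTM()
model.eval()

# One 30-frame keypoint sequence (random here; real input comes from the webcam pipeline).
sequence = torch.randn(1, 30, KEYPOINT_DIM)
with torch.no_grad():
    probs = torch.softmax(model(sequence), dim=1)[0]
best = int(torch.argmax(probs))
print(f"{SIGNS[best]}: {probs[best].item() * 100:.1f}%")   # top sign with its percentage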
HapyDesigns.com
My HapyDesigns site started out as a charity project and later evolved into a merchandise site that I adapted for the Commercient company. Today the company's customers, partners, and staff purchase hoodies, t-shirts, hats, and stickers for a fee or with a promo code. I integrated multiple third-party vendors, such as Printful for on-demand manufacturing and drop shipping, and Squarespace for the website and payments engine. You can take a look at all of the products by clicking the Shop link at the top left of this webpage.


Scripted Generative Art NFTs
Take a look at my MostValuedPigs collection of 440 uniquely generated pigs on OpenSea. Each pig is assembled from multiple layers of hand-drawn vector features. Each element (clothing, eyes, mouth, backgrounds, hats) was drawn in several varieties and assigned a rarity weight. The elements were mixed together to generate 440 pigs, each carrying a different rarity and thus a different collectible value, and the pigs were published for sale on OpenSea as NFTs. Apart from the vector line drawings of the elements themselves, I used and modified a software script from HashLips to generate the layered artwork and metadata files and to mint the NFTs on the Polygon blockchain.
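The HashLips art engine itself is JavaScript, but to keep the examples on this page in one language, here is a minimal Python sketch of the core idea it implements: each layer lists its variants with rarity weights, one variant per layer is chosen at random according to those weights, the transparent layers are composited into a final image, and an OpenSea-style metadata file records the chosen traits. Layer names, file paths, and weights are illustrative assumptions, not the actual collection's values.

# Minimal sketch of weighted layer selection and compositing, the core idea
# behind generative collections like this one. All names and weights are examples.
import json
import os
import random
from PIL import Image

# Each layer lists variants with rarity weights (higher = more common).
LAYERS = {
    "background": [("blue.png", 50), ("gold.png", 5)],
    "eyes":       [("round.png", 40), ("laser.png", 2)],
    "hat":        [("none.png", 60), ("crown.png", 3)],
}

def generate_pig(token_id, out_dir="build"):
    os.makedirs(out_dir, exist_ok=True)

    # Pick one variant per layer, weighted by rarity.
    chosen = {}
    for layer, variants in LAYERS.items():
        names = [v[0] for v in variants]
        weights = [v[1] for v in variants]
        chosen[layer] = random.choices(names, weights=weights, k=1)[0]

    # Composite the transparent PNG layers in order (all layers must share one canvas size).
    base = Image.open(f"layers/background/{chosen['background']}").convert("RGBA")
    for layer in ["eyes", "hat"]:
        overlay = Image.open(f"layers/{layer}/{chosen[layer]}").convert("RGBA")
        base = Image.alpha_composite(base, overlay)
    base.save(f"{out_dir}/{token_id}.png")

    # Write OpenSea-style metadata describing the chosen traits.
    metadata = {
        "name": f"Most Valued Pig #{token_id}",
        "image": f"{token_id}.png",
        "attributes": [{"trait_type": k, "value": v.replace(".png", "")} for k, v in chosen.items()],
    }
    with open(f"{out_dir}/{token_id}.json", "w") as f:
        json.dump(metadata, f, indent=2)

for token_id in range(1, 441):   # 440 pigs, as in the collection
    generate_pig(token_id)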





In my graphic design classes with Mr. Ryan, I worked with many software packages and applied a variety of digital art skills. Using Photoshop, Illustrator, and InDesign, I worked on prompts from my teacher to create visual design projects that replicate real-world needs. The baseball photo was created for a friend to announce on his social media his acceptance to play for a college team.