Thursday, April 24, 2025

The Well-Designed Career: Engineering Success Beyond the Technical

After four years of hard work, it’s graduation time and, for many, time for that first professional paycheck!

Across my career, I've witnessed tremendous technological advancements and workplace transformations. While technical skills remain fundamental, work-life balance is equally critical for long-term success and fulfillment. The following insights represent lessons I wish someone had shared with me when I began my journey in 1979: wisdom that might help today's graduates navigate their careers more effectively.

  1. Establish boundaries early. The engineering field often glorifies long hours and constant availability, but this can lead to burnout. Setting clear work-life boundaries from the beginning of your career helps establish sustainable patterns.
  2. Navigate technological flexibility wisely. Remote work options have expanded significantly, offering greater flexibility but also blurring the line between work and personal time. Being intentional about disconnecting is crucial.
  3. Embrace non-linear career paths. Engineering careers often involve periods of intense work followed by more balanced phases. This natural ebb and flow means you shouldn't judge your entire career by any single moment.
  4. Set limits on continuous learning. The rapid pace of technological change requires ongoing education, but this shouldn't consume all your personal time. Negotiate for professional development during work hours when possible.
  5. Prioritize physical and mental health. Regular exercise, social connections outside work, and meaningful hobbies provide necessary balance to the analytical nature of engineering work.

After more than four decades in engineering, these lessons highlight the importance of intentional boundaries, adaptability to changing work environments, acceptance of career fluctuations, balanced professional development, and holistic wellbeing practices. New graduates who implement these insights early can build rewarding careers that support both professional achievement and personal fulfillment throughout their working lives.

Congrats to the Class of 2025!

Friday, April 18, 2025

Evolving Engineering Education: AI's Impact on the Classroom

About six weeks into the current spring semester, I stepped in to teach an electromagnetics course when a professor from the Electrical and Computer Engineering Department at the University of Hartford needed to take emergency leave. Returning to teach this subject after six years has been eye-opening. The contrast between teaching methodologies in 2019 versus now reveals a significant transformation in engineering education—one largely driven by the integration of artificial intelligence tools into the classroom experience. 

My Teaching Journey

After serving three amazing and fulfilling semesters as a Visiting Professor at the University of Hartford, in September 2019 I moved to the Engineering Department at Holyoke Community College. There, I spent five more amazing and fulfilling years (2019-2024) teaching Circuits 1 and 2 courses to electrical engineering students who typically transfer to university programs like the excellent one at Hartford to complete their Bachelor of Science in Electrical Engineering (BSEE). These foundational classes at Holyoke, usually taken by second-year students, provide the essential groundwork for the more advanced electromagnetics course I've now returned to at Hartford.

What's fascinating is how the AI classroom revolution unfolded around me at Holyoke without my complete recognition. While I taught circuits courses day to day, AI tools gradually worked their way into my teaching—so incrementally that the transformation wasn't immediately obvious. It was only upon returning to teach electromagnetics at Hartford after six years that the dramatic contrast became apparent.


Problem-Solving Transformation

The traditional approach to electromagnetics problems—careful application of Maxwell's equations, vector calculus, and boundary conditions through meticulous manual calculation—now exists alongside powerful AI alternatives that can generate solutions almost instantaneously.

In a recent electromagnetics lecture, I worked through a standard homework problem using the conventional pencil-and-paper method, spending about 10 minutes to complete the derivation and solution. When I ran the same problem through Gemini, the contrast was striking: within seconds it produced the correct solution, presented step by step with conceptual connections that enhanced understanding. However, running the same problem multiple times did not always produce the correct answer, though the level of detail in the worked solution made the errors easy to identify. Gemini is just one of many such tools, and as these AI systems continue to improve, such errors should become less and less frequent.
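
One practical habit worth building is checking any AI-generated derivation numerically before trusting it. The sketch below is a minimal illustration of the idea, using an assumed finite-line-charge problem (not the actual homework problem) with made-up values: it compares the closed-form textbook field on the perpendicular bisector against a brute-force integration of Coulomb's law.

```python
# Sanity check of a derived result against direct numerical integration
# (illustrative problem and values, not the one from class).
import numpy as np
from scipy.integrate import quad

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
lam  = 1e-9               # linear charge density, C/m (assumed)
L    = 0.5                # half-length of the line charge, m (assumed)
d    = 0.2                # perpendicular distance from the midpoint, m (assumed)

# Closed-form result: E = lam/(4*pi*eps0) * 2L / (d*sqrt(d^2 + L^2))
E_analytic = lam / (4 * np.pi * EPS0) * 2 * L / (d * np.sqrt(d**2 + L**2))

# Brute-force Coulomb integration (perpendicular component only;
# the components parallel to the line cancel by symmetry)
integral, _ = quad(lambda z: d / (d**2 + z**2) ** 1.5, -L, L)
E_numeric = lam / (4 * np.pi * EPS0) * integral

print(f"analytic: {E_analytic:.6e} V/m")
print(f"numeric:  {E_numeric:.6e} V/m")   # the two should agree closely
```

When the two numbers disagree, either the hand derivation or the AI's answer contains an error, and the size of the disagreement usually points to where.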


Redefining Educational Focus

This technological shift is reframing the fundamental questions we ask in engineering education:

  • Instead of "How do we solve this problem?" we're increasingly asking "How do we interpret and verify these solutions?"
  • Rather than spending most of our time on calculation mechanics, we can focus on "What deeper insights can we gain from these results?"
  • The emphasis moves from computation to critical evaluation: "How do we assess the validity and limitations of AI-generated solutions?" One such check is sketched after this list.
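
As a concrete example of that last question, a candidate solution can be vetted against the governing equations symbolically. Here is a minimal sketch using sympy, with an assumed point-charge potential standing in for whatever expression an AI tool might return:

```python
# Check a candidate potential against Laplace's equation symbolically
# (hypothetical example; physical constants dropped for clarity).
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

V = 1 / r   # candidate potential, e.g. as returned by an AI tool

laplacian = sp.diff(V, x, 2) + sp.diff(V, y, 2) + sp.diff(V, z, 2)
print(sp.simplify(laplacian))   # 0: satisfies Laplace's equation away from the origin
```

A result that fails this kind of check can be rejected without redoing the entire derivation by hand.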

Finding Balance in Engineering Education

Despite these changes, foundational knowledge remains essential. Students still need to understand Maxwell's equations, boundary conditions, and vector analysis. The difference is that AI now serves as a powerful tool for exploration, verification, and extending understanding beyond textbook problems.

For today's engineering students, proficiency with AI tools is becoming as important as understanding the core principles of their discipline. They need to learn when to rely on their foundational knowledge, when to leverage AI assistance, and most importantly, how to critically evaluate AI-generated solutions.


Looking Forward

This unexpected return to teaching electromagnetics at Hartford after a six-year gap has provided a unique vantage point to witness the evolution of engineering education. The combination of traditional engineering fundamentals with cutting-edge AI tools promises to produce graduates better equipped to tackle the complex technological challenges of tomorrow.

As educators, our role continues to evolve. We're no longer just teachers of technical content, but guides helping students navigate the increasingly AI-augmented landscape of engineering practice. This includes fostering the critical thinking skills needed to effectively collaborate with AI systems while maintaining the fundamental understanding that makes such collaboration meaningful.

Tuesday, April 15, 2025

AI Jobs in 2025: What Engineers Should Know

According to Stanford's latest AI Index Report, the demand for AI skills continues to grow in 2025. After a temporary slowdown, AI job postings have rebounded significantly, with positions requiring AI skills now representing 1.8% of all U.S. job postings, up from 1.4% in 2023.

Job Market Trends

The report, which analyzes data from LinkedIn and Lightcast (tracking over 51,000 websites), shows AI jobs are here to stay. Singapore leads globally with 3.27% of job postings requiring AI skills, followed by Luxembourg (1.99%) and Hong Kong (1.89%). The United States comes in at 1.75%.

Interestingly, adoption of AI coding tools like GitHub Copilot appears to be creating more jobs rather than eliminating them. According to LinkedIn economist Peter McCrory, companies using these AI assistants are actually increasing their software engineering hiring, though new hires typically require fewer advanced programming skills.


Shifting Skill Requirements

While Python remains the top specialized skill in AI job postings for 2023-2024, the broader skills landscape is evolving:

  • Generative AI skills saw nearly 4x growth year-over-year
  • Data analysis, SQL, and data science remain highly sought after
  • Most AI-related skills increased in demand compared to 2023
  • Only autonomous driving and robotics skills declined

McCrory notes that LinkedIn members "are increasingly emphasizing a broader range of skills and increasingly uniquely human skills, like ethical reasoning or leadership."


Workforce Impact and Concerns

Despite fears about AI eliminating jobs, the evidence is mixed. A McKinsey survey found 28% of software engineering executives expect generative AI to decrease their workforce in the next three years, while 32% anticipate growth. The overall percentage of executives expecting workforce reductions appears to be declining.


Diversity Challenges

A concerning trend is the persistent gender gap in AI talent. LinkedIn data shows women in most countries are less likely to list AI skills on their profiles, with males representing nearly 70% of AI professionals on the platform in 2024. This ratio has remained "remarkably stable over time," according to the report.


Academia vs. Industry

The report highlights how expensive AI training has shifted innovation from academia to industry. AI Index steering committee co-director Yolanda Gil noted: "Sometimes in academia, we make do with what we have, so you're seeing a shift of our research toward topics that we can afford to do with the limited computing [power] that we have."


Looking Forward

As AI tools become more integrated into workflows, the distinction between "AI jobs" and regular positions continues to blur. Success in this evolving landscape will likely require a combination of technical proficiency and uniquely human capabilities. The report emphasizes the importance of cross-sector collaboration between industry, government, and education to provide researchers with necessary resources and help educators prepare students for emerging roles in AI.


For engineers looking to stay competitive, developing a mix of technical AI skills (particularly Python and generative AI) while cultivating leadership and ethical reasoning capabilities appears to be the winning formula for 2025 and beyond.

Monday, April 14, 2025

Grokking: The "Aha!" Moment in Artificial Intelligence Podcast

A couple of robot friends discussed my last blog post on grokking, and together we turned the discussion into a podcast a little over 10 minutes long.

Friday, April 11, 2025

Understanding Grokking In Artificial Intelligence

I’m doing some AI course development and the terms "grok" and "grokking" come up often. Here’s a short post on where "grok" came from and what it means.


Origin of "Grok"

The term "grok" comes from Robert A. Heinlein's 1961 science fiction novel "Stranger in a Strange Land." In the story, it's a Martian word meaning to understand something so thoroughly that the observer becomes unified with the observed. Computer programmers and AI researchers later adopted this term to describe deep, intuitive understanding as opposed to surface-level memorization—like the difference between knowing something intellectually and understanding it on a fundamental level.


What is Grokking?

Consider teaching a child to ride a bike. For weeks, they struggle with balance, fall repeatedly, and need constant support. Then one day—everything clicks! They're riding confidently as if they've always known how. This sudden transition from struggling to mastery mirrors what happens in AI systems.

Grokking describes when an AI system appears to suddenly "get it" after a lengthy period of seemingly minimal progress. Initially, the AI memorizes training examples without grasping underlying principles. Next comes an extended plateau where performance improvements stall. Finally, a breakthrough occurs where the AI demonstrates genuine comprehension of the pattern.
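
For readers who want to see this for themselves, below is a minimal sketch in the spirit of the original modular-arithmetic grokking experiments: learning (a + b) mod P from half of all input pairs, with strong weight decay. I'm using a small MLP rather than the transformer from the published work, and the hyperparameters are illustrative assumptions, so reproducing the delayed jump may take some tuning. The pattern to watch for is training accuracy saturating long before test accuracy moves.

```python
# Minimal grokking-style experiment (illustrative settings, not tuned).
import torch
import torch.nn as nn

P = 97
torch.manual_seed(0)

# Every (a, b) pair, labeled with (a + b) % P, split into train/test halves
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

class Net(nn.Module):
    """Tiny model: embed both operands, then a one-hidden-layer MLP."""
    def __init__(self, dim=128):
        super().__init__()
        self.emb = nn.Embedding(P, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, P)
        )
    def forward(self, x):
        return self.mlp(self.emb(x).flatten(1))

model = Net()
# Strong weight decay is a key ingredient in published grokking results
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

@torch.no_grad()
def accuracy(idx):
    return (model(pairs[idx]).argmax(1) == labels[idx]).float().mean().item()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Expect train accuracy to saturate long before test accuracy jumps
        print(f"step {step:6d}  train {accuracy(train_idx):.3f}  "
              f"test {accuracy(test_idx):.3f}")
```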


The Multiplication Analogy

Take a child learning multiplication. At first, they might memorize that 7×8=56 as an isolated fact. They can answer "What is 7×8?" correctly but struggle with related problems like "What is 8×7?" or word problems requiring multiplication concepts. This mirrors early AI training, where the system correctly predicts outcomes for examples it has seen but fails at novel situations requiring the same underlying principle. The AI hasn't yet "grokked" multiplication—it has merely memorized specific input-output pairs.

With continued learning, the child begins to recognize that multiplication represents repeated addition, that it's commutative (7×8=8×7), and can visualize it as an array. Eventually, they develop number sense that allows them to solve unfamiliar problems by decomposing them (7×9 might be solved as 7×10-7).

Similarly, when an AI system "groks" a concept, it doesn't just memorize training examples but discovers the underlying relationships. It can generalize to unseen problems and demonstrate flexible application of knowledge. The difference is qualitative, not just quantitative—the AI has moved from rote recall to genuine comprehension.
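
The contrast is easy to caricature in a few lines of code. The toy sketch below is illustrative only (real networks don't store literal lookup tables), but it captures the behavioral difference between recalling stored pairs and applying the underlying rule:

```python
# Toy contrast between memorization and the underlying rule.

# "Memorizer": a table of previously seen input-output pairs
memorized = {(7, 8): 56}
print(memorized.get((7, 8)))   # 56   -- the exact example it has seen
print(memorized.get((8, 7)))   # None -- fails on the equivalent novel case

# "Generalizer": multiplication understood as repeated addition
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply(8, 7))  # 56 -- handles the unseen ordering
print(multiply(7, 9))  # 63 -- and novel problems (7*10 - 7 gives the same answer)
```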


Significance in Machine Learning

This grokking phenomenon challenges several conventional assumptions in machine learning. Traditional learning curves show rapid improvement early in training that gradually levels off—suggesting diminishing returns with additional training. But grokking reveals a more complex reality.

In traditional understanding, machine learning models follow a fairly predictable pattern: they learn quickly at first (capturing the "low-hanging fruit" of obvious patterns), then improvement slows as the model approaches its capacity. This view suggests that if performance plateaus for a significant period, further training is likely wasteful. Grokking challenges this by revealing that even during apparent plateaus, crucial but subtle reorganization may be happening within the model. What looks like stagnation might actually be the model exploring the solution space, discarding overfitted memorization in favor of simpler, more generalizable rules.


Memorization vs. Generalization

This distinction between memorization and generalization is central to understanding grokking. Early in training, models often achieve good performance on training data through memorization—essentially creating a complex lookup table rather than learning underlying patterns. This explains why neural networks can sometimes perfectly fit random noise in their training data.
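
That last point is easy to demonstrate. The sketch below, in the spirit of the well-known "fitting random labels" experiments, trains a small network on pure noise; the sizes and settings are illustrative assumptions:

```python
# A small network can reach near-perfect training accuracy on random
# labels while remaining at chance on fresh data (illustrative sizes).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # random inputs
y = rng.integers(0, 2, size=200)    # random labels: nothing to learn

clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=5000, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))         # near 1.0: pure memorization

X_new = rng.normal(size=(200, 20))                # fresh noise
y_new = rng.integers(0, 2, size=200)
print("test accuracy:", clf.score(X_new, y_new))  # near 0.5: chance level
```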

During the grokking process, something remarkable happens: the model appears to transition from complex memorization strategies to simpler, more elegant solutions that capture the true rules governing the data. Researchers have observed that when grokking occurs, the internal weights of the neural network often become more organized and sparse—suggesting the model is discovering fundamental structures rather than storing arbitrary associations.


Implications for Model Evaluation

This has profound implications for how we evaluate machine learning models. Test accuracy alone may not reveal whether a model has truly "grokked" a concept or merely memorized training examples. A model might perform well on test data that's similar to training data while failing catastrophically on more novel examples.

True generalization—the hallmark of grokking—often requires evaluating models on systematically different distributions or conceptually more challenging examples. For instance, a model might correctly classify images of cats it has seen before without understanding the abstract concept of "catness" that would allow it to recognize unusual cats or drawings of cats.

This behavior mirrors phase transitions in physical systems—like water gradually heating until it suddenly transforms into steam. Training an AI resembles finding the lowest point in a complex landscape. Simple, generalizable solutions often hide in deep valleys that require time to discover, and the system must explore numerous suboptimal paths before finding the optimal one.


Implications for AI Development

Grokking suggests that advanced AI might require not just more data or computing power, but also greater patience—allowing systems to train until they experience their "aha!" moment. It reminds us that learning—for both humans and machines—isn't always linear or predictable. Sometimes the most significant breakthroughs emerge after prolonged periods where progress appears stagnant.

Monday, April 7, 2025

AI in Primary Care: A Problem-First Approach

Last week I wrote about my search for a new primary care physician. Based on that physical exam experience and my involvement in the development and teaching of a couple of Artificial Intelligence (AI) courses, my thoughts went where they seem to go a lot these days: "Why not use AI?" So I did a little research. Bottom line – it still has a ways to go.

In a recent special report in the Annals of Family Medicine titled "AI in Primary Care, Start With the Problem," Dr John Thomas Menchaca argues for a strategic approach to implementing AI in primary care. Rather than pursuing AI for its own sake, physicians and developers must first identify the right problems to solve. Dr Menchaca compares misguided AI implementations to the Segway—a technological marvel that failed because it didn't address real needs. In contrast, electric scooters succeeded by solving the specific "last mile" problem in urban commuting. Similarly, AI must target precise pain points in healthcare.

Primary care's most pressing issue isn't clinical complexity but time management. Studies reveal full-time primary care physicians work over 11 hours daily, with more than half spent on electronic health record (EHR) tasks—a workload directly linked to high burnout rates. This data provides a clear roadmap for effective AI implementation. The most time-consuming EHR tasks include documentation (the largest time sink), chart review, medication management (which could save up to 2 hours daily based on studies with pharmacy technicians), triaging laboratory results, managing refills, responding to patient messages, and order entry.

Current AI documentation tools show mixed results. Many generate rough drafts requiring substantial editing, sometimes taking as much time as writing notes from scratch. This mirrors issues with traditional clinical decision support tools, which often increase rather than decrease workload. The challenge is developing AI that genuinely saves time in clinical settings by integrating seamlessly into workflows, minimizing oversight requirements, empowering team members to resolve issues independently, and measuring impact through time-saving metrics.

Dr Menchaca calls for academic medicine to bridge the gap between clinicians and developers through partnerships at national conferences, research focused on root causes of inefficiency, detailed workflow analyses, and implementation in organizations that truly prioritize clinician well-being. A key concern is that time saved by AI might simply be filled with additional work—more patients or administrative tasks—highlighting that technology alone cannot fix systemic issues in primary care delivery.

AI won't magically solve problems like overwhelming patient panels or overloaded schedules. As Dr Menchaca notes, "AI is just one tool—a means to an end, not the end itself." Meaningful solutions must ultimately lighten clinicians' workloads. By targeting specific, high-impact areas and measuring success through time saved, AI can contribute to a more sustainable future for primary care.

The message for AI innovators is clear: solve real problems, save real time, and keep the clinicians central to your design process. Only then can AI fulfill its potential to transform primary care by making existing systems work more efficiently rather than attempting to reinvent them.

And of course a disclaimer: I’m just a patient, not a doctor. I have done a lot of industry-specific development work over the years, though. From a development perspective, Dr Menchaca's approach sure makes a lot of sense.

Friday, April 4, 2025

In Search Of A New Primary Care Physician

[Image AI Created]
My 80-year-old primary care doctor is retiring, forcing me to find someone new. A huge shoutout to Dr William Mugg in South Hadley, Massachusetts, who got me this far and who always conducted thorough physical exams, checking my throat, ears, heart, and body with careful attention.

I’ve been shopping for a replacement and found a highly rated doctor in his 40s. My first physical with the new guy was surprisingly different. The initial meeting was excellent: he showed genuine interest in connecting and made me feel comfortable and valued. Things changed, though, when the exam started. He spent most of the time looking at a laptop, asking me questions from a list, clicking answers into an electronic health record (EHR) system, and ordering lab work, with minimal physical examination.


What has changed?

I recently read an IEEE Spectrum article titled "The Doctor Will See Your Electronic Health Record Now" that perfectly described this shift. The article explains that today's doctors spend up to 6 hours daily on documentation with only 27% of their time actually interacting face-to-face with patients. While I left with comprehensive lab orders, I missed the hands-on approach that made me feel thoroughly examined and cared for by Dr Mugg. Here’s some background on how things have changed.


Looking Back

In 2004, President George W. Bush set an ambitious goal for U.S. healthcare providers to transition to EHRs by 2014, with the vision that a person's complete medical information would be available "at the time and place of care, no matter where it originates." This initiative gained significant momentum with the 2009 HITECH Act, which budgeted $49 billion to promote health information technology.


Today

Fast forward to today, and the U.S. has spent over $100 billion on healthcare IT. While nearly 80% of physicians and almost all non-federal acute-care hospitals now use EHR systems, the implementation has fallen short of its original vision in several key ways according to the article:

  • Poor Usability: EHR systems rank in the bottom 10% of IT products for usability. Physicians spend between 3.5 and 6 hours daily on documentation, with only 27% of their time spent face-to-face with patients.
  • Clinician Burnout: 71% of physicians feel EHR systems contribute to burnout, with half of U.S. physicians experiencing burnout symptoms.
  • Limited Interoperability: Despite progress, sharing patient data between different systems remains difficult. An average hospital uses 10 different EHR vendors internally, and many specialists have unique systems that don't communicate with each other.
  • Cybersecurity Concerns: Security was largely an afterthought in early implementations. From 2009 to 2023, nearly 5,900 healthcare breaches exposed over 520 million health records, with the average 2024 breach costing $9.97 million.
  • Productivity Drain: Rather than increasing productivity as promised, EHRs have become "productivity vampires" that require extensive documentation and management.

Improvement Needed

EHR systems have transformed patient notes from thoughtful clinical assessments into bloated documentation driven by billing requirements and compliance metrics. Patient notes are now twice as long as they were 10 years ago, yet often contain less meaningful clinical insight. The focus has shifted from holistic patient care to satisfying digital requirements and insurance codes. The IEEE Spectrum article quotes Robert Wachter, Chair of Medicine at UCSF, who states that EHRs "became an enabler of corporate control and outside entity control" over medical practice. Many physicians now practice "checkbox medicine" to satisfy EHR protocols rather than applying the nuanced clinical judgment that characterized previous generations of doctors.


Perhaps most concerning is how EHR implementation has contributed to unprecedented levels of physician burnout. Emergency room doctors, who make approximately 4,000 EHR clicks per shift, report the highest burnout levels. This exhaustion inevitably impacts patient care. While old-school physicians like Dr Mugg could focus their energy on clinical reasoning and patient interaction, today's doctors must divide their attention between patients and an increasingly demanding digital ecosystem.


Looking Forward

While technological advances, including AI scribes (robots), may eventually ease documentation burdens, the fundamental shift away from hands-on medicine represents a significant loss in healthcare delivery. The art of the physical exam—once the cornerstone of medical practice—risks becoming a secondary consideration in our digital healthcare environment.


As we continue to digitize healthcare, we must ask whether we've sacrificed something essential in the process: the irreplaceable value of a doctor who looks at the patient rather than the screen.


And finally – thanks again Dr Mugg. Enjoy your retirement!