Tuesday, April 28, 2026

From Outputs to Outcomes: Mastering the "Usage First" Design Principle (Article Review)


The Core Philosophy: Usage First, Implementation After

In the career of an architect, there is a transformative "aha moment" where disparate coding tricks and design patterns coalesce into a single, strategic framework. For Jonathan Boccara, this synthesis was triggered by a guest post from Miguel Raggi and reinforced across three distinct, high-pressure projects. This realization led to the "Usage First" mindset—a fundamental shift from technical construction to purposeful design.

The Usage First Principle: Prioritize the interface of the consumer over the convenience of the provider. Design the output, the interface, or the experience first—assuming any underlying implementation is possible—before determining technical constraints or storage requirements.

By adopting this architectural discipline, project leaders secure three critical business advantages:

  • Higher User Retention and Conversion (Happy Users): By defining the ideal experience without initial technical bias, you identify the "awesome ideas" that drive engagement. Implementation costs are addressed only after the value proposition is solidified.
  • Reduced Total Cost of Ownership (Expressive Code): When developers design the "call site" before the function logic, the resulting code is naturally more expressive and readable. This clarity reduces long-term maintenance overhead and technical debt.
  • Accelerated Time-to-Market (Faster Development): Starting with the end usage defines the most "convenient format" for data. This eliminates the "guesswork" of schema design, ensuring that development effort is never wasted on storage structures that fail to support the final query.

This philosophy is the most effective way to navigate the complexities of designing data-driven systems.

--------------------------------------------------------------------------------

The Scenario: The Lake Annecy Boat Rental System

Consider a boat rental enterprise on Lac d'Annecy. During the peak summer season, the business must manage high-volume tourist traffic. To maximize revenue, the owner requires an online system that allows customers to secure bookings effortlessly.

System Requirements

Data inputs (the owner's side):

  • Raw opening schedules segmented by day type (weekends, weekdays, summer season).
  • Inventory list of available boats.
  • Hourly price lists for each boat class.

Desired outputs (the user's side):

  • A high-level availability matrix showing specific boat status for any given date.
  • Granular booking slots ranging from 30-minute intervals to full-day reservations.
  • Real-time pricing calculated for the specific selected duration.

Defining "Usage" in Context

In this ecosystem, the "Usage" is the customer’s booking journey. The critical data point is not the owner's raw schedule or the price list in isolation; it is the availability of a specific boat at a specific time in a convenient format. If the system cannot instantly present this availability matrix, the user cannot convert.

The designer’s success depends on which of two mentalities they apply to solve this problem.

--------------------------------------------------------------------------------

Comparative Analysis: Input-First vs. Usage-First Design

The "Natural Order" (Input-First Workflow)

Most junior developers follow the "Forward Workflow," which mirrors the literal flow of data but creates architectural friction:

  1. Collect Input: Start with the data provided (opening times, boat descriptions).
  2. Design Storage: Attempt to design a database schema to hold this raw data.
  3. Plan for Queries: Speculate on how the data might be queried later to show availability.

The "Usage-First" Approach

This approach flips the workflow to prioritize the architectural end-state:

  1. Design the Output: Design the query and the data processing as if the data were already present in the most "convenient format" for the user interface.
  2. Define the Format: Use this ideal query to dictate exactly what the storage format must be.
  3. Design Storage: Work backward to determine how to transform raw input into that pre-defined, convenient storage structure.
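As a minimal sketch of this backward workflow, assuming Python and invented names (`availability_matrix`, a `(boat, date)` keyed store): the query is written first, and the storage format falls out of it.

```python
from datetime import date

# Step 1: write the ideal query first, pretending the data already lives
# in the most convenient format for the booking page.
def availability_matrix(boats, store, day):
    """Return {boat: list of free slots} for a single date."""
    return {boat: store.get((boat, day), []) for boat in boats}

# Step 2: the query above dictates the storage format: a mapping keyed by
# (boat, date) whose values are ready-to-display free slots.
store = {
    ("Pedalo 1", date(2026, 7, 14)): ["10:00-10:30", "10:30-11:00"],
    ("Motorboat A", date(2026, 7, 14)): ["14:00-18:00"],
}

# Step 3 (omitted here): work backward to transform the owner's raw
# schedules into this pre-defined structure.
matrix = availability_matrix(["Pedalo 1", "Motorboat A"], store, date(2026, 7, 14))
```

Notice that no schema guesswork happened: the shape of `store` was forced by the query that consumes it.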

The "So What?": Why it Matters

The Usage-First path removes the ambiguity of schema design. When you start with the input, you are guessing at what the database should look like. When you start with the usage, the query defines the schema with mathematical certainty. Reflecting on his own recent project experience, Boccara noted he was "impressed how it... resulted in a much faster development time." It ensures you build the right implementation the first time.

--------------------------------------------------------------------------------

The Developer’s Advantage: Expressive Design and Speed

For the individual contributor, the Usage-First principle is a form of Top-Down Design that enforces high-level abstraction and prevents "leaky abstractions."

Pro-Tips for Usage-First Development

  • Write the Call Site First: Before writing a sub-function, write the code that uses it. This forces you to design the interface from the perspective of the logic it serves.
  • Pretend Anything is Possible: Do not allow current technical hurdles or the lack of an existing library to limit your interface design.
  • Focus on the Interface: Select function names and parameters that fit the algorithm perfectly, ensuring the "call site" reads like a clear sentence.
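A minimal sketch of the first tip, with invented names: the call site inside `monthly_revenue` is written before `price_of` exists, and it alone decides the helper's interface.

```python
# Written first: the high-level logic, calling a helper that does not
# exist yet. The call site decides that price_of takes one booking and
# returns a number, and nothing more.
def monthly_revenue(bookings):
    return sum(price_of(b) for b in bookings)

# Written second: the helper is implemented to satisfy that call site.
def price_of(booking):
    hours, hourly_rate = booking
    return hours * hourly_rate

total = monthly_revenue([(2, 30), (1, 45)])  # 2h at 30/h + 1h at 45/h -> 105
```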

The Insight of "Pretending"

By "pretending" a sub-function already exists, you ensure that internal implementation details—such as database IDs or complex data structures—do not accidentally "bleed" into your function signatures. This results in an interface shaped entirely by its use-case. It creates code that "fits" its environment, rather than code that forces the environment to adapt to its internal limitations.

--------------------------------------------------------------------------------

Summary Checklist for Data-Driven Design

Apply this strategic checklist to every new system component to ensure architectural integrity:

  • [ ] Identify the Consumer: Who or what is the primary consumer of this data or function (e.g., a tourist, an API client, or a high-level algorithm)?
  • [ ] Define the "Convenient Format": Without considering database constraints, what is the most efficient format for the consumer to receive this information?
  • [ ] Write the Call Site: Draft the line of code that utilizes the result before you write a single line of the underlying logic.
  • [ ] Work Backward to Storage: Based on the ideal output format, define the storage schema that makes that output easiest to generate.
  • [ ] Refine for Constraints: Only now, identify the necessary trade-offs (e.g., trading away 20% of the ideal user experience for an 80% reduction in cost).

Teacher’s Closing Note

Usage-First design is the discipline that separates senior architects from standard coders. It is a mental shortcut to excellence. Whether you are architecting a global booking system or a single helper function, starting with the end usage is the fastest way to build the right implementation. It replaces the "fog of how" with the "clarity of why."

For all 2026 published articles list: click here

...till the next post, bye-bye & take care

A New World Record in Beijing’s Half-Marathon between Robots & Humans


Introduction: The "Whoosh" Moment in Beijing

The 2026 Beijing E-Town Humanoid Robot Half-Marathon began as a spectacle and ended as a historical inflection point. For 29-year-old Zhao Haijie, one of 12,000 human runners, the moment of disruption arrived three miles in. It wasn’t the steady, rhythmic breathing of a human rival that signaled the overtake, but the mechanical whir of high-torque actuators and the staccato tap of carbon-fiber feet.

This was no sanitized laboratory demonstration; it was a brutal, real-world stress test of bipedal locomotion. When the machines passed, it wasn't a gradual gain—it was a "whoosh" that signaled a permanent shift in the hierarchy of physical performance. The 50-minute milestone has been crossed, and the implications for the future workforce are as staggering as the speeds themselves.

Takeaway 1: The Human Record Wasn't Just Beaten—It Was Shattered

The star of the circuit was "Lightning," a bright-red humanoid developed by Honor. In a display of raw bipedal efficiency, Lightning crossed the finish line in a blistering 50 minutes and 26 seconds.

To grasp the magnitude of this achievement, look at the data: Lightning didn’t just win; it effectively "lapped" human capability. It was nearly 12% faster (roughly seven minutes) than the standing human world record of 57:20 set by Jacob Kiplimo. Even with a late-race crash into a railing—which required a brief assist from technicians—the machine’s recovery speed was so high it still swept the podium alongside its Honor-developed stablemates.

"I felt it was going quite fast," said Zhao Haijie, the fastest human in the race at 1:07:47. "It just went whoosh right past me."

Takeaway 2: From Humiliation to Domination in Just 12 Months

In the world of robotics, 12 months is an eternity when hardware-software vertical integration is a national priority. The 2026 results stand in jarring contrast to the inaugural 2025 race, which was a logistical nightmare for the machines.

  • 2025: The winner, a robot named Tiangong, clocked a sluggish 2:40:00. Only 6 of 21 entrants finished; the rest were victims of "fritzing," overheating, or total motor failure.
  • 2026: Over 100 robots competed. Four humanoids finished under an hour.

This leap wasn't accidental. It is the direct result of China’s supply chain dominance in AI chips, sensors, and high-density batteries. Furthermore, Beijing’s 2026-2030 Master Plan for futuristic technologies has accelerated the development of brain chips and quantum computing integrations. What we witnessed was the physical manifestation of a top-down geopolitical hardware race.


Takeaway 3: The "Mike Tyson" Paradox of Modern Humanoids

The race revealed a stark developmental gap: the machines possess elite physical power but remain cognitively fragile. I call this the "Mike Tyson" Paradox—the body of a world-class athlete with the judgment of a toddler.

The environment was tellingly ironic: golf carts equipped with stretchers and wheelchairs trailed the mechanical runners in case of catastrophic failure. One unit face-planted 200 feet from the start, requiring its torso to be held together with packing tape just to continue. Another crossed the finish line with precision, only to immediately veer into a bush. Contrast these "athletes" with Xiao Pai, the two-foot-tall companion robot that spent the race bouncing along carrying a baby bottle, and you see the sheer breadth of the 150+ companies currently flooding this market.

"Robots today have the body of Mike Tyson but are still missing a brain like Stephen Hawking," explained Xue Qingheng, founder of Intercity Technology Co., whose model Xiao Cheng successfully completed the race. "Once the brain problem is solved, the scope for imagination here is immense."

Takeaway 4: 40% Autonomy is the New Baseline

Perhaps the most critical metric for industry analysts wasn't speed, but the "40% baseline." While some units were remotely piloted, 40% of the robots operated with total autonomy. These machines—including Xiao Cheng—navigated the 21km course using only onboard sensors, gait algorithms, and edge-AI.

Operating in a "wild" environment with 12,000 unpredictable humans and varying weather provides "edge case" data that laboratory simulations simply cannot replicate. For observers like 41-year-old financial worker Liu Yanli and his son Jinyu, this autonomy represented more than tech—it represented a future "sense of security" in elder care and domestic support.

Takeaway 5: It’s a Multi-Million Dollar "National Priority," Not a Hobby

This wasn't a recreational race; it was a high-stakes trade show. Honor’s victory is set to be rewarded with orders exceeding 1 million yuan ($146,500). In China, robotics is no longer a niche interest; it is a critical infrastructure play.

The mission is to move these machines from the pavement to the power grid. Developers are eyeing a future workforce where humanoids fix electrical grids, staff factories, and provide disaster response. By dominating the components—the batteries, the sensors, and the actuators—China is positioning itself to be the factory and the architect of the bipedal age.

Conclusion: The End of the Parallel Lane?


To ensure safety, organizers kept humans and robots in "parallel lanes" during the Beijing E-Town race. It was a fitting metaphor for our current era of AI: we are running alongside these machines, watching their rapid iteration with a mix of curiosity and trepidation.

However, the 50-minute milestone suggests these lanes won't stay parallel for long. The speed of iteration—from the stumbling Tiangong of 2025 to the record-shattering Lightning of 2026—proves that the hardware is ready. The final question is no longer if the machines will join our workforce, but how soon these "Mike Tyson" bodies will receive their "Stephen Hawking" brains. When that happens, the lanes will merge, and the human workforce will find itself in a very different race.


Monday, April 27, 2026

Navigating the Unknown: A Student’s Guide to Reading Unfamiliar Code


Introduction: The Mental Shift

Faced with thousands of lines of "someone else’s code" spread across hundreds of files, it is natural to feel overwhelmed. You might find yourself criticizing the style or architecture, imagining that if it were only written your way, it would be "easier" to grasp. However, as a mentor, I must tell you that the core difficulty is rarely a failure of the original author or a lack of your own skill; it is simply the absence of a mental model.

When you read your own code, the map of connections already exists in your mind. With unfamiliar code, that map is missing. To build it, you must shift your perspective from critic to explorer:

"Approach code without judgment, with the purpose of understanding, not evaluating."

By setting aside stylistic preferences, you clear the cognitive space required for deep learning. Before we begin pulling on the threads of the logic, however, we must ensure your environment is configured for active exploration.

--------------------------------------------------------------------------------

Preparation: Setting the Stage for Exploration

Diving into a complex codebase without the right tools is like navigating a dense forest in the dark. To gain the confidence needed for effective discovery, you must move the code from a static set of text files into a living, observable system.

  • "Smart" IDE: indexes the codebase for navigation (jumping to definitions, finding usages), allowing you to trace connections instantly without losing your place in the file structure.
  • Building and running: validates the environment and allows for runtime observation via a debugger, confirming the code is functional and providing a "live" look at how data actually flows.
  • Local Git repository: initialize a baseline (git init .; git add *; git commit -m "Baseline") to create a "safe zone" for fearless experimentation; you can revert any "discovery change" instantly.

Once your environment is stable and you can execute the program at will, you need a strategic entry point to begin your investigation.

--------------------------------------------------------------------------------

Strategy: Finding the End of the Thread

Code is non-linear; it is rarely meant to be read from file one to file one hundred. Think of it as many tangled balls of yarn on the floor. To make sense of it, you must find an interesting "end" and pull.

The Power of "Grepping"

To find where execution begins for a specific feature, use your IDE's global search (often called "grepping") for external markers. Search for:

  • GUI Elements: Visible text found on buttons, labels, or menu headers.
  • Command Line Options: Flags (e.g., --verbose) used to launch the program.
  • Error Messages: Specific strings that appear when the system fails.
  • Input and Focus Events: Keyboard or mouse event handlers that reveal how the application integrates with the underlying platform.
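If you want to see what the IDE's global search is doing under the hood, here is a toy "grep" sketched in Python; the directory and button label are invented for the demo.

```python
import os
import tempfile

def grep(root, needle):
    """Toy 'grep': return (path, line_number, line) for every match under root."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    if needle in line:
                        hits.append((path, lineno, line.rstrip()))
    return hits

# Demo: hunt for a visible button label in a throwaway "codebase".
root = tempfile.mkdtemp()
with open(os.path.join(root, "ui.py"), "w", encoding="utf-8") as f:
    f.write('button = Button(label="Book now")\n')

hits = grep(root, "Book now")  # one hit, pointing at ui.py line 1
```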

Following the Button

In a GUI-driven application, "Following the Button" is a premier tactic for building a mental map:

  1. The Two-Step Search: Search for the button's text. In localized codebases, this string will lead you to a localization mapping file. From there, you must find the Constant associated with that string, and then search for that constant in the source code to find the actual widget definition.
  2. Locate the Handler: Identify the onClick handler or the specific function tied to that widget's action.
  3. Set a Breakpoint: Pause execution in the debugger when the button is clicked.
  4. Analyze the Stack Trace: Look at the stack trace to see the path from the "main" loop to this specific handler. This reveals the dispatching mechanism of the entire framework.
  5. Map the Object Tree: Use the debugger to traverse "parent" relationships. This helps you understand the widget hierarchy—a structure similar to a DOM tree—which reveals how the UI is logically organized.
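Step 1, the two-step search, can be sketched as two lookups; the localization map, constant names, and source lines below are all invented.

```python
# The localization mapping file: constant -> visible text.
localization = {
    "BTN_BOOK_NOW": "Book now",
    "BTN_CANCEL": "Cancel",
}

# Stand-in for the real source tree.
source_lines = [
    "book_button = Button(text=BTN_BOOK_NOW)",
    "cancel_button = Button(text=BTN_CANCEL)",
]

# First search: from the button's visible text to its constant.
constant = next(k for k, v in localization.items() if v == "Book now")

# Second search: from the constant to the actual widget definition.
widget_line = next(line for line in source_lines if constant in line)
```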

Once you have identified how the user interface triggers specific actions, the next logical step is to see how the system validates its own internal logic.

--------------------------------------------------------------------------------

Using Tests as Runnable Documentation

Traditional documentation is often outdated or missing, but tests represent the author's intent in a way that must remain compatible with the code. Integration and system tests are particularly valuable for new developers because they demonstrate the system’s "boundaries."

Runnable Documentation: This term describes tests that serve as functional examples of how to initialize the system, which access points are primary, and which use cases were prioritized by the authors.

As you form hypotheses about how the code works, use the test suite to verify them:

  • Discovery Refactoring: Write new tests or modify existing ones to see if the code behaves as you expect.
  • Pro-Tip: Treat this as "discovery code." Be prepared to delete these tests once you understand the logic. Deleting discovery code is vital; it prevents you from falling into the sunk cost fallacy, where you try to force a codebase to fit an initial (and likely incorrect) mental model simply because you spent time writing code for it.
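A hypothetical discovery test might look like this; `slugify` stands in for whatever legacy function you are probing, and the test exists only to confirm (or refute) a hypothesis before being deleted.

```python
def slugify(title):
    """The unfamiliar code under investigation (invented for illustration)."""
    return title.strip().lower().replace(" ", "-")

def test_hypothesis_spaces_become_dashes():
    # Hypothesis: multi-word titles are trimmed, lowercased, and joined
    # with dashes. If this assertion fails, the mental model is wrong.
    assert slugify("  Lake Annecy Boats ") == "lake-annecy-boats"

test_hypothesis_spaces_become_dashes()  # passes silently if the hypothesis holds
```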

While tests show how a system should work, reading the entry point of the program shows how it actually initializes its backbone.

--------------------------------------------------------------------------------

Mapping the Big Players: Reading "Main" and Classes

To gain a high-level architectural view, you must find the "Main-like" function—the driver of the module or program.

Identifying the "Big Players"

Read the "Main" function from top to bottom, focusing on the cardinality of the objects created.

  • The Engine: Look for "Big Players"—objects created at startup that last the lifetime of the program. If only one or two instances of a class are created (Singletons or Managers), they likely represent the architectural backbone.
  • The Anchors: Identify "Has-a" relationships. These objects hold onto other components and serve as the central anchors for your mental map.
  • The Context: Note which objects are passed into almost every function call; these represent the "Context" or "State" of the application.
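The shape this reading typically reveals can be sketched as follows; every class and function name is invented, and the point is the cardinality: one long-lived context versus many short-lived requests.

```python
class Database:
    pass

class Renderer:
    pass

class AppContext:
    """A 'Big Player': created once at startup, holding the 'has-a' anchors."""
    def __init__(self):
        self.db = Database()
        self.renderer = Renderer()

def handle_request(ctx, request):
    # High-cardinality, short-lived work; note the context passed in.
    return f"handled {request} via {type(ctx.db).__name__}"

def main():
    ctx = AppContext()  # one instance for the lifetime of the program
    return [handle_request(ctx, r) for r in ("login", "book", "pay")]

results = main()
```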

Strategy Checklist for Reading a Class

When your investigation narrows to a specific class, use this checklist to decode its role:

  • Study Inheritance and Interfaces first: This reveals the "contract"—how the rest of the system is forced to view this class.
  • Grep for Includes/Imports: See which files rely on this class to understand its "neighborhood" and influence.
  • Analyze Public Functions: Treat the public API as the "command interface." Private functions are usually just implementation details; don't get bogged down in them until you understand the public commands.

After the technical reading of files is complete, the final step is to move that knowledge from the screen into your long-term memory.

--------------------------------------------------------------------------------

Solidifying Understanding: Refactoring and Rubber Ducking

Learning is best achieved through action. "Discovery Refactoring"—changing names, extracting methods, or simplifying logic—forces you to engage with the code. However, avoid "style-guided refactorings" that focus on aesthetics; these can make you arrogant and blind to the original constraints that forced the author to write the code a certain way.

The Retelling Process

To ensure your mental model is robust, move beyond "Rubber Ducking" (talking to an object) and engage in a social retelling:

  1. Synthesize Notes: Compile your diagrams and debugger traces into a cohesive story.
  2. Explain the Logic: Try to explain a feature's flow to a colleague or write it as a fictional blog post.
  3. Identify Gaps: The social pressure to be clear to another human will immediately highlight "fuzzy" areas in your understanding where your mental model is incomplete.

Explaining to a real person prevents you from glossing over details, ensuring that your discovery code serves its purpose before it is deleted.

--------------------------------------------------------------------------------

Conclusion: Embracing the Snapshot in Time

Mastering the art of reading code is ultimately an exercise in professional empathy. As you navigate these files, remember to maintain the "Compassionate Programmer" mindset. Every codebase is a snapshot in time—a reflection of a specific moment where requirements were changing, plans were unfinished, and deadlines were looming.

Diverse coding styles are not obstacles; they are opportunities to see how different minds solve the same fundamental problems. Approach the work with kindness toward those who came before you, and you will find that the code begins to speak back.


Sunday, April 26, 2026

Stop Reading Code Like a Novel: 4 "Spoiler" Techniques for Instant Understanding


We’ve all been there: staring down a 500-line legacy function that feels like it was written to keep secrets rather than solve problems. Our natural instinct is to start at line one and read sequentially, just like we were taught in school. But here is the hard truth: reading a complex function "cover to cover" is a trap. It is slow, it is exhausting, and it’s often the least effective way to actually understand what is happening.

To master legacy systems, you need to shift your approach. We are going to stop being passive readers and start performing an Inspectional Reading. The goal isn’t to savor every line; it’s to gain maximum knowledge in minimum time.

The Non-Fiction Mindset: Skimming is a Superpower

We’ve been told since childhood that skimming is a shortcut or a sign of laziness. In software engineering, I’m telling you it is a professional superpower.

Source code is not a mystery novel. You aren't reading it for the prose or the plot twists; you’re reading it to acquire knowledge. Source code is non-fiction. When you approach a function, your Inspectional Reading should have two immediate goals:

  1. Determine Relevance: Is this code even responsible for the bug or feature you’re working on?
  2. Identify the Main Message: What is the high-level intent before you get bogged down in the implementation details?

"Source code is read for knowledge and understanding. Like non-fiction books. For this reason, you don't want to start by reading a function ‘cover to cover’."

Get the Spoiler: Start at the End

If a function is a story, you need to know how it ends before you care about how it began.

Step Zero: Orient with the Signature

Before you even look at the function body, look at the name, the parameters, and the return type. If the function is well-named (e.g., calculateMonthlyTax), your inspectional reading becomes a confirmation mission rather than a discovery mission. This "Step Zero" orients your brain so you know exactly what to look for once you dive in.

Step One: Find the "Protagonist"

Once you’re inside, skip straight to the last line. The logic of any function is a journey toward its output. By finding the "spoiler" at the end, you identify the Protagonist of the story.

In a perfect world, this is a clean return statement. However, in the trenches of legacy code, "returns" can be messy. Look for:

  • Explicit Return Values: The return something; at the bottom.
  • Modified Parameters: Outputs passed back through the function’s arguments.
  • Global State: Changes to variables outside the function’s scope.
  • Exceptions: Values "returned" via error-handling channels.

Whatever the form, the object being returned is the point of the function. Know the ending, and the rest of the code starts to make sense.
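The four "return" forms above can be sketched in a few lines of Python; every name is invented for illustration.

```python
results = []                      # global state, mutated as an output

def explicit(x):
    return x + 1                  # explicit return value at the bottom

def via_parameter(x, out):
    out.append(x + 1)             # output passed back through an argument

def via_global(x):
    results.append(x + 1)         # output through global state

def via_exception(x):
    raise ValueError(x + 1)       # value "returned" on the error channel

buf = []
via_parameter(1, buf)
via_global(1)
try:
    via_exception(1)
except ValueError as err:
    caught = err.args[0]
```

In each case the protagonist is the same value; only the channel it travels through differs.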

"Get a big spoiler, skip to the end of the function's story, and start from the last line. It should look like return something."

Spot the "Main Characters" via Frequency

Once you’ve identified the protagonist, you need to find the other Main Characters. In any function, the most important objects or variables are the ones that appear most often.

Don't just count them manually. Use your IDE to your advantage: click a variable to highlight every occurrence within the function.

By looking at the Frequency of these highlights, you can instantly distinguish between:

  • Main Characters: The central objects the function is designed to manipulate (e.g., invoice, userProfile).
  • Secondary Characters: Supporting objects that exist only for a few lines to help with a specific calculation (e.g., tempCounter, i).

This is a life-saver for massive functions. Even if you are only looking at a specific 20-line block in a much larger script, the variables that are highlighted most frequently will tell you what that specific section is actually about.
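As a rough stand-in for the IDE's highlight count, you can even tally identifier frequency programmatically; the sample function below is invented.

```python
import ast
from collections import Counter

sample = """
def total(invoice):
    amount = 0
    for line in invoice.lines:
        amount += line.price
    return amount
"""

# Count every variable reference, roughly what the IDE highlights.
names = Counter(
    node.id
    for node in ast.walk(ast.parse(sample))
    if isinstance(node, ast.Name)
)

main_character, count = names.most_common(1)[0]  # 'amount' appears most often
```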

Filter for the "Main Action"

Not every line of code is created equal. To understand a function quickly, you must learn to filter out the noise. In every codebase, there is a distinct difference between the "main action" and the "bookkeeping."

  • The Bookkeeping Style: These are secondary quests. They look like if (log.isDebugEnabled()), null checks, input validation, or setting up secondary characters. It’s "administrative" code.
  • The Main Action Style: This is the domain-specific business logic. It looks like calculateInterest(), updateInventory(), or applyDiscount().

The Scanning Technique: Scan the lines rapidly. If a line looks like Bookkeeping, don't dwell on it. Even if you don't fully understand the line, move on. Your "gut feeling" will improve with practice. You are looking for the lines that actually move the protagonist toward the ending you found in the "spoiler" step.
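An invented example showing the two styles side by side: the guards at the top are bookkeeping, and only the last two lines move the protagonist.

```python
import logging

log = logging.getLogger(__name__)

def apply_discount(order, rate):
    if order is None:                          # bookkeeping: null check
        raise ValueError("order required")
    if not 0 <= rate <= 1:                     # bookkeeping: input validation
        raise ValueError("rate out of range")
    if log.isEnabledFor(logging.DEBUG):        # bookkeeping: logging guard
        log.debug("discounting %s", order)

    total = sum(order["items"])                # main action: business logic
    return total * (1 - rate)                  # main action: the protagonist

discounted = apply_discount({"items": [100, 50]}, 0.5)  # -> 75.0
```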

--------------------------------------------------------------------------------

Pro-Tip: The Second Pass If you reach the end of a function and the "Main Action" still hasn't clicked, don't panic. Perform a second, rapid scan. You'll find it’s much easier the second time because your eyes are now familiar with the "landscape" of the code. The signal will naturally start to stand out from the noise.

--------------------------------------------------------------------------------

Conclusion: Mastering the Inspectional Game

Understanding code is a game of identification and filtration. When you stop being a passive reader and start being an active Inspector, the friction of legacy code begins to melt away. You aren't there to read a story; you’re there to locate the primary objects, identify the conclusion, and filter out the secondary causes.

The next time you open a black-box function, will you start at line one, or will you skip straight to the ending?
