Wednesday, January 29, 2025

A Journey of Professional Growth (LLM generated)

 (Early 2025: LLM generated in one go, without retouching)

From Command and Control to Coherent Models: A Journey of Professional Growth

Throughout my career, I've had the privilege of wearing many hats – from Product Owner to CTO, Software Architect to Managing Director. Each role has been a stepping stone, shaping not just my skills but also my fundamental approach to work and leadership. Reflecting on this journey, I see a clear pattern of evolution, a shift from reactive, belief-driven actions towards a more strategic, data-informed, and model-driven approach. This blog post is a reflection on that evolution, broken down into key areas of growth that have defined my professional path.

1. From Command to Cooperation: Embracing Collaborative Leadership

My early roles often saw me operating within a "command and control" framework. As a Software Engineer, I received instructions and executed them. As I moved into leadership positions like Product Owner and Head of Software Development, I initially carried this mindset forward, focusing on directing my teams. However, I quickly realized the limitations of this approach.

The real breakthrough came when I began to prioritize cooperation over command. This wasn't about relinquishing control but about fostering a shared understanding and empowering my teams. Instead of simply dictating tasks, I started focusing on:

  • Product Owner: Helping all engineers understand the why behind customer needs, fostering a culture where the product's purpose was clear to everyone.
  • Head of Software Development: Cultivating an environment where engineers collaborated, mastered their technology, and took ownership of their work.
  • CTO: Expanding this collaborative spirit to the entire company, ensuring everyone was aligned with our technological vision.

This shift from command to cooperation was fundamental. It created a more engaged, innovative, and ultimately, more effective work environment.

2. From Relaying to Reformulating: Mastering Strategic and Tactical Excellence

Another significant evolution in my work style was moving beyond simply relaying information to actively reformulating it. Initially, I acted as a conduit, passing on requirements, needs, and feedback between stakeholders. This was useful, but it lacked depth.

Reformulating, on the other hand, involved:

  • Deep Understanding: Truly grasping the core needs, often complex and multifaceted, of customers, technology, and the company itself.
  • Strategic Framing: Translating these needs into actionable plans, considering both long-term goals and immediate tactical steps.
  • Clear Communication: Articulating these reformulated needs and plans in a way that resonated with different audiences, whether it was engineers, executives, or clients.

This ability to master and reformulate became particularly crucial in roles like:

  • Product Owner: Transforming raw customer feedback into well-defined product strategies.
  • Head of Software Development: Developing a technology roadmap that balanced innovation with stability and scalability.
  • Technical Advisor: Helping companies understand and strategically leverage specific technical domains.

Reformulation became a powerful tool for bridging the gap between high-level vision and concrete execution.

3. From Reacting to Optimizing: Embracing Agile and Finite Horizon Planning

Early in my career, I often found myself in reactive mode, constantly putting out fires and struggling to meet deadlines. This "firefighting" approach was exhausting and unsustainable. The turning point came when I embraced the principles of optimization and finite horizon planning.

Optimization, in this context, meant:

  • Balancing Priorities: Carefully considering the trade-offs between different projects, tasks, and goals.
  • Resource Allocation: Making informed decisions about how to allocate time, budget, and manpower effectively.
  • Iterative Approach: Breaking down large projects into smaller, manageable iterations, allowing for continuous learning and adjustment.

Finite horizon planning complemented this by:

  • Setting Realistic Goals: Defining achievable objectives within specific timeframes.
  • Regular Review and Adjustment: Periodically reassessing progress and adapting plans as needed.
  • Predictability: Creating a sense of stability and reducing the stress of constant firefighting.

This shift was evident across various roles:

  • Product Owner: Prioritizing features and managing sprints to ensure consistent delivery.
  • Software Architect: Designing systems with a focus on long-term maintainability and scalability.
  • Managing Director: Developing a strategic roadmap that balanced short-term wins with long-term growth.

Embracing optimization and finite horizon planning transformed my approach from reactive chaos to proactive, sustainable progress.

4. From Beliefs to Data: Embracing Data-Driven Decision Making

Early in my career, decisions were often based on intuition, experience, or even just gut feelings – what I'm calling "beliefs." While experience is valuable, it can also be subjective and prone to biases. The pivotal shift was moving towards data-driven decision making.

This involved:

  • Gathering Relevant Data: Collecting information from various sources, such as user feedback, market research, and performance metrics.
  • Rapid Validation: Using data to quickly test hypotheses and validate assumptions.
  • Reducing Lag: Minimizing the time between making a decision and understanding its impact.

This data-driven approach became a cornerstone of my work, particularly in roles like:

  • Head of Research: Using data to guide research direction and validate findings.
  • Software Engineer: Using data, quality assurance, and requirements to track down bugs and to understand whether a feature works as intended.
  • CTO: Leveraging data to inform strategic technology decisions and measure their effectiveness.

By embracing data, I was able to make more objective, informed, and ultimately, more successful decisions.

5. From Data to Models: Embracing Coherent Decision Making

The final stage of my evolution has been the move from simply using data to building and utilizing models. Data alone can be overwhelming and difficult to interpret. Models provide a framework for understanding data, identifying patterns, and making predictions.

Models, in this context, are:

  • Simplified Representations: They capture the essential elements of a complex system or process.
  • Data-Aligned: They are built and validated using real-world data.
  • Decision-Support Tools: They help us to understand the potential consequences of different choices and make more coherent decisions.

This model-driven approach has been particularly valuable in roles like:

  • Software Architect: Using models to design robust and scalable systems.
  • Technical Advisor: Helping companies develop models to understand and optimize their technology investments.
  • Managing Director: Using models to forecast business performance and guide strategic planning.

By embracing models, I've been able to move beyond simply reacting to data to proactively shaping outcomes.

Conclusion: A Continuous Journey of Growth

My professional journey has been a continuous evolution, marked by a shift from reactive, belief-driven actions to a more strategic, data-informed, and model-driven approach. This evolution has not been linear, and each stage has built upon the previous ones.

From embracing cooperation over command to mastering the art of reformulation, from optimizing processes to leveraging data and building models, each step has equipped me with new tools and perspectives. This journey is far from over. As technology and the business landscape continue to evolve, I'm committed to continuous learning and growth, always striving to refine my approach and make a meaningful impact. I hope that sharing my evolution can be helpful and educational to others on their own professional journeys.

 (To conclude: good LLM prompt writing makes all the difference!)

All original content copyright James Litsios, 2025.

Saturday, November 09, 2024

Why the struggle with functional programming?

Slow FP adoption...

Why is there no widespread adoption even though functional programming has existed for over sixty years?

Not so long ago I explained this as:

The reason is actually pretty complicated: the strength of FP comes largely from its ability to make code very strong. That strength in the code weakens the ability of developers to make local changes. Initially, this sounds like something good, yet it creates a new challenge: developers are bound to the constraints of code written by others, and they are not only unhappy, they are less productive! You can solve this by bringing your FP to the next level: math. A few companies do this; now a developer is not subject to constraints but to mathematics. If the developer understands the math, she or he finds this acceptable.
I am simplifying things a bit here, yet FP, or any language that is strong enough, brings in a whole complexity of technical ownership and people dynamics that does not need to be dealt with in OO. The reality is that FP allows more scaling, but maintaining the stability of people within that scaling is a major challenge.

I want to present this a bit differently here, because the above puts too much blame on people. Yes, FP is tricky for teams, yet the root cause is not the teams; the teams just highlight the root cause!

The Design - Features - Tasks trilemma

Let's start with the following trilemma:

Design - Features - Tasks

And by trilemma, I mean that you cannot simultaneously set goals to progress all three. (And by task, I mean "effective coding work".) You must specify two of these, and let the third one be determined by the outcome of your work. You can specify design and features, you can specify design and tasks, you can specify features and tasks... but you cannot specify design, features, and tasks before you start working. To be more concrete: when you focus on architecture before coding, and come up with a design, you are in fact constraining the features that will be implementable with tasks that follow that design. If you then specify additional features, before starting the tasks that use the given design, you will likely fail.

Functional programming done seriously "ties" design and tasks together. By this I mean that FP which pushes its higher-order design model highly constrains the design. The result is that, by construction, a team that pushes its FP design culture is also a team that is "squeezing itself" into an over-constrained design-features-tasks trilemma. To be specific, an FP team may be imposing design, features, and tasks all at the same time, and not understand that this is an over-constrained setup that will most often fail.

There is a solution: just like I mentioned in that earlier post, you get around a trilemma by splitting one of its terms. Here we split tasks into design tasks and non-design tasks. The approach is then the following:

  1. Given some desired features
  2. Work on design-tasks so that the design can implement the features
  3. Then work on non-design-tasks that implement the features with the developed design.

From a work triplet perspective, we have:

  1. features & design-tasks -> new-design
  2. features & non-design-tasks -> implemented features

Here, the new-design produced by 1 is used in 2.
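
As a toy illustration of this two-phase split (all names below are made up; this is only a sketch of the idea, not a prescription): in phase 1 the design-task grows the design so that it can carry the desired feature at all, and in phase 2 the non-design-task implements the feature within that design.

    from typing import Protocol

    # Phase 1 (features & design-tasks -> new-design): the desired feature is
    # "orders can be cancelled", so the design grows a `cancel` method.
    # Nothing is implemented yet; the output of this phase is the design itself.
    class OrderBook(Protocol):
        def place(self, order_id: str, qty: int) -> None: ...
        def cancel(self, order_id: str) -> None: ...  # new design surface

    # Phase 2 (features & non-design-tasks -> implemented features): the feature
    # is implemented within the design produced by phase 1.
    class SimpleOrderBook:
        def __init__(self) -> None:
            self._orders = {}  # order_id -> quantity

        def place(self, order_id: str, qty: int) -> None:
            self._orders[order_id] = qty

        def cancel(self, order_id: str) -> None:
            self._orders.pop(order_id, None)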

However, the challenge is that we now have a two-phase work process, and by default teams, especially agile teams, run a single-phase process. Agile typically asks teams to focus on tasks that deliver MVP features, and therefore agile typically sets people up to fail, as there is no new feature in good FP without new design, and teams often struggle to juggle the three needs of progressing design, features, and tasks within the same agile effort.

TDD helps

Test-driven development (TDD) is one way to escape the limits of a single-phase agile process. The idea is the following (a small sketch follows the list):

  1. Given some desired features
  2. Develop tests for these features, AND extend the design to cover the features.
  3. Then work on non-design-tasks that implement the features and pass the tests.
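
A test-first flavour of step 2, reusing the hypothetical names from the previous sketch (so this is not self-standing code, just the TDD variant of the same example): the test exists before the implementation, and writing it is what forces the design to grow its `cancel` method.

    # Step 2: a test written before the implementation work of step 3.
    def test_cancel_removes_order() -> None:
        book = SimpleOrderBook()        # implementation from the earlier sketch
        book.place("a-1", qty=100)
        book.cancel("a-1")              # the feature the design must now cover
        assert "a-1" not in book._orders  # peeks at internal state for brevity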

However...

Yet there is still a challenge: design changes most often depend on strong opinions, FP depends on design changes, and therefore FP depends on strong opinions. Getting teams to gel around strong opinions is often not trivial. And that is why I wrote the statement shared above.

All original content copyright James Litsios, 2024.

Sunday, October 06, 2024

A software mind's eye

I have been writing higher-order functional programming in Python for the last few weekends:

  • hofppy (https://github.com/equational/hofppy) will be a Python library. For the moment it is a collection of Jupyter notebooks.
  • My initial goal was to have a handy FP toolkit that supports applied math with JAX's JIT. Yet I realize that what I am really doing is reimplementing the generic part of a compiler for a reactive (trading) language I wrote in 2010 in F#, while including a few tricks I have picked up since then, the primary one being to think of code as implementing a synchronous execution model.
  • There is really very little code like this on the web, which is why I am doing this as open source.

This blog post is written in part to mention the above, as the first JAX-JIT-supporting monad and comonad models are already "nice". Yet it is also meant to bring up the process of creating new technology.
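
To give a feel for what "JAX JIT supporting" monadic code can look like, here is a minimal sketch on the monad side only. It is not hofppy's actual API, and all names are made up: the numeric work stays in small pure functions that JAX jit-compiles, while a tiny state-monad-style bind threads extra bookkeeping (here, a step counter) around them.

    import jax
    import jax.numpy as jnp

    # A monadic value here is a function: state -> (result, new_state).
    def unit(x):
        return lambda s: (x, s)

    def bind(m, f):
        def run(s):
            x, s2 = m(s)
            return f(x)(s2)
        return run

    # The numeric work is kept in small pure functions that JAX jit-compiles.
    @jax.jit
    def scale(x):
        return x * 2.0

    @jax.jit
    def shift(x):
        return x + 1.0

    # Lift a jitted step into the monad while counting how many steps ran.
    def step(g):
        return lambda x: lambda s: (g(x), s + 1)

    prog = bind(unit(jnp.array(3.0)), lambda x:
           bind(step(scale)(x), lambda y:
                step(shift)(y)))

    value, n_steps = prog(0)   # value == 7.0, n_steps == 2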

My recipe for doing something new, such as leading a team on a new subject or writing software that did not exist before, is the following:

  1. Create a "mind's eye" imaginary world of "usage" of the end result.
  2. Imagine how it should work.
  3. Start very quickly to implement features.
  4. Review 1, 2, and 3, as you regularly cycle through them, again, again, and again!

I use the word imagine twice above. For a few reasons...

People are constantly asking me "how do you know?" with regard to clients' requirements, technology, design, approach, etc. The reality is that I do not completely know: I know much, I have done much, I have written hundreds and hundreds of thousands of lines of software. Yet when something is new, no one knows fully how things will look in the end. The reality is that the newer it is, the less you know. And... the more you think you know, the more you are fooling yourself. However, just like with poker, the optimal approach is to imagine much and often, and to work as hard as possible to build and maintain a mindful view of "new usage" and "new functioning technology".

The more experience, the better! Your imagination is only as good as the real details it contains. I have tons of real-life learning that I use for this higher-order FP in Python. I mention a few of these learnings here:
  • Lazy software designs are easier to express in code (a minimal sketch follows this list).
    • The most flexible design is code that does nothing but lazily accumulate what it could do until, finally, because output is expected, it works backwards from the output expectations to pull out the desired output.
  • Multi-stack semantics is perfectly normal.
    • For example, in the project's monadic code mentioned above, the free monads have "their" variables, "normal" Python has its variables, and these are carefully kept separate.
    • Multiple flows of causality exist in the real world; stacks of different semantics are the only cheap way to capture them cleanly in software.
  • For every Yin, there is a Yang.
    • If there is more, there is also less; If there are variables, there are constraints, etc.
  • Structure is more important than types.
    • I started in functional programming without types. Then I embraced types. Only later did I come to understand that the sweet spot is somewhere in between. The reality is that it is much easier to use code to constrain software at "just the right level" than to use types.
  • Math is the most powerful structure.
    • Think geometry, topology, and dualities!
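
Below is a minimal sketch of the "lazy accumulation" point above (made-up names, unrelated to hofppy's internals): operations are recorded but not executed, and only when an output is demanded does the chain work backwards from that demand and run.

    # Nothing runs until an output is expected; then the accumulated
    # operations are pulled, backwards from the demanded output.
    class Lazy:
        def __init__(self, thunk):
            self._thunk = thunk
            self._forced = False
            self._value = None

        def force(self):
            if not self._forced:
                self._value = self._thunk()
                self._forced = True
            return self._value

    def lazy_map(f, lx):
        # Accumulate the operation without applying it.
        return Lazy(lambda: f(lx.force()))

    source = Lazy(lambda: sum(range(1_000_000)))   # not computed yet
    pipeline = lazy_map(lambda x: x + 1, source)   # still not computed

    print(pipeline.force())   # output is expected, so the work finally runs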

 All original content copyright James Litsios, 2024.