Tuesday, September 13, 2016

Frameworks, What Are They?

Here is an example of a framework.

You (on the left) want to reach your users (on the right), across a great chasm. The scaffold in the middle is your starting base framework. It's the System.out.println of Java. It's the Cocoa framework that iOS provides. It's Node.js for your backend servers. It's the Elastic Load Balancer on your AWS stack.

So when using this framework to cross the chasm, there are two approaches. You can either,
  1. Use it, or
  2. Build on top of it.
Using the framework looks something like this:
You have accomplished your goal of reaching your users. But it's obvious that this structure will collapse under any extra stress. See my post on why managing stress and scale is essential to growing your product and company.

It's appropriate to use a framework for
  • hackathons, and
  • prototypes or proof-of-concepts.
Honestly, those are the only two valid reasons I can think of for the straightforward use of a framework.

"But what about agile development of MVP (Minimum Viable Product)," you ask?

I'll explain more when discussing building on top of frameworks, which looks something like this:
Notice that when you build on top of a framework, you're using the same pattern of structure that the base framework is composed of. In fact, you are extending the base framework, and building a framework of your own. 
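Here's a rough sketch of what that can look like in code, assuming Python and its built-in logging framework (the module and function names are made up for illustration). Instead of calling the framework directly everywhere, you wrap it in a thin layer of your own, and the rest of your code builds on that layer:

  # applog.py - a hypothetical thin layer built on top of Python's logging framework.
  # The rest of the codebase talks to this module instead of the framework directly,
  # so the underlying framework can be tuned or swapped without tearing everything down.
  import logging

  _logger = logging.getLogger("myapp")          # "myapp" is a made-up app name
  _logger.addHandler(logging.StreamHandler())
  _logger.setLevel(logging.INFO)

  def event(name, **details):
      """Log a structured application event."""
      _logger.info("%s %s", name, details)

  # Elsewhere in the application:
  #   from applog import event
  #   event("user_signed_up", plan="free")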

You've added a little bit of compound interest to the world of software. If you do it right, this can be used by other people for similar chasm-crossing problems. In fact, notice the simple bridge that finally spans the chasm. If someone has a similar problem, but a wider chasm, they can copy the bridge and extend it further.

Also notice that this structure is more stable. It can bear load without breaking. It can scale without having to be torn down and rebuilt if the chasm widens or the cliffs grow higher, which will happen if you are expecting to be successful.

What about MVP? 

As the product owner, how can you expect your developers to spend all their time and energy building these structures? You need to move FAST; you don't have time for these shenanigans.

The fallacy is assuming that it takes more time to build these structures.

Remember that you're reusing existing patterns from the base framework. So developers don't have to think up new ideas and new solutions, which would come with new problems and new corner cases. A developer experienced in frameworks and common design patterns can build one of these just as fast as the developer who just uses the framework.

In fact, I will attest from experience that building on top of a framework is FASTER, because you will save time in the future on debugging and rebuilding when situations change.

Common framework mistakes

You have heard of the term over-engineering. Here is a depiction of what over-engineering may look like:
If just using a framework is too cold, and building on top is just right, then over-engineering is too hot for Goldilocks to handle.

Some reasons that lead to over-engineering are egotism and impatience.

When I was a young developer, it was boooring to use the old tried-and-tested methods. Even as an experienced developer, the ego creeps up, and I think "I could integrate this common library to solve this problem that's been around since the beginning of time... OR I can build my own! Wouldn't that be fun??" And the answer is usually NO. That said, taking these detours is a great way to expand my skills and knowledge as an engineer. But still, for the sake of your business, limit these meanderings.

Sometimes, I just don't want to spend time reading documentation and blog posts on a topic. I'm ignorant of being ignorant. So I just start doing something without researching whether there's a solution already out there.

There's also a lack of appreciation for simplicity. It's an ego boost to be able to create something complex, something hard for anybody without my "great" mental aptitude to understand. Who else would have thought to solve that particular problem with a frigging catapult?? The answer is no one who's experienced.

Compound interest and the Information Revolution

There's a well-known urban legend about Einstein saying that compound interest is the most powerful force in the universe. Even if he might not have said it, I think he probably did appreciate the power of incremental growth. There are other common phrases, like "standing on the shoulders of giants," that illustrate the same point.
Frameworks are the compound interest of programming.
A framework is a vague term in the software industry. So are library, API, module, node, package, and so forth. They're all either related, or they all mean the same thing; it depends on who's talking and how they're using the terms.

The general idea is that a framework is something used to solve your problem, and most likely you didn't build it. For programmers, it's a relief to not have to build something, not because we're lazy, but because we know we'll just mess it up the first time. There are frameworks/libraries/packages out there that are old and reliable; all the bugs have been worked out through use.

Frameworks are the reason why software has become such a world-shifting power. In the past half century, electrical and software engineers have been incrementally growing the seeds of ideas such as Unix, HTML, JavaScript, and hundreds of other components. Now some of these have matured and offer tremendous leverage, allowing one person to do the work it took 100 people to do ten years ago, and the pace of growth keeps speeding up.

Wednesday, September 7, 2016

A bit of logic: How computers work

There was a course in my freshman year that we called the weed class. Despite what it may sound like, it was unfortunately the opposite: the class was notorious for weeding you out of software engineering.

What made this class hard wasn't its name, Digital Design Fundamentals, an introduction to boolean algebra and circuit design. It was hard because it was the first time I was made to understand how a computer works.

Little bits

You've heard that bits, 0 and 1, are how information is stored in a computer. But how do 0's and 1's translate into the application you're using to read this right now? Bits are useful as static information, but it's when you move them around, make them dynamic, that you're able to do all the things we've accomplished in the last half of the 20th century.

How are bits made dynamic? Through the use of boolean algebra. Which sounds very scary, and it is... or is it? Boolean algebra is also why information is stored in computers as bits.

It's all very logical

Luckily, I was cut out to be a software engineer, barely. Digital Design Fundamentals was the first C of my college career, one of the best C's I've ever gotten, full of great lessons. It wasn't until the last semester of my super senior year that I took the class that should've been Digital Design's pre-requisite: Introduction to Logic, which was actually an introductory course for law students.

Boolean algebra is a mathematical way of representing and solving logical problems. It uses ANDs, ORs, and NOTs as operators: just as you use addition and subtraction to operate on numbers, in boolean algebra you use these operators to operate on statements.

You are actually very familiar with logic. Not only are AND/OR/NOT operations the building blocks of computing, they're also the building blocks of rationality. When you have a conversation with someone, you take turns talking (unless you're rude). Each time you talk, you present a package of communication. This communication is made of statements and a conclusion, or a point, derived from and supported by the statements.

So you might present a piece of communication to someone as such:
  1. It's a really hot day today.
  2. We can become dehydrated from heat.
  3. We should drink some water.
This argument can be represented as: if A and B then C, or A & B -> C.

The listener may choose to evaluate this communication, which is to check whether each component statement (A and B) is true or false, and whether the conclusion (C) is valid or invalid.

For example, maybe the listener doesn't agree that A is true. To her, it's actually a cold day, so A is false. But B is still true: you can become dehydrated from heat. Since A isn't true, though, the listener doesn't agree with the conclusion C, that water should be drunk, and so C is invalid.

This is an example of boolean algebra.
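Here's a tiny sketch of that evaluation in code, assuming Python (the variable names are mine, chosen just for illustration):

  # The listener's evaluation of "A and B, therefore C".
  hot_day = False            # A: the listener thinks it's actually a cold day
  heat_dehydrates = True     # B: heat can still dehydrate you

  # The conclusion C only follows if both premises hold.
  should_drink_water = hot_day and heat_dehydrates
  print(should_drink_water)  # False -> the listener rejects the conclusion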

At first there seem to be just these three operators, like how in mathematics you start out with addition and subtraction. But if you're adding the same thing to itself multiple times, why not derive another operator, like multiplication? And the same for subtraction, with division. In boolean algebra there are similarly derived operators: NAND, NOR, and XOR.
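Here's a small sketch of how those derived operators fall out of the basic three, using Python's and/or/not just to spell out the definitions:

  # NAND, NOR, and XOR, built only out of AND, OR, and NOT.
  def nand(a, b): return not (a and b)
  def nor(a, b):  return not (a or b)
  def xor(a, b):  return (a or b) and not (a and b)   # true when exactly one input is true

  print(xor(True, False))   # True
  print(xor(True, True))    # False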

Why bits?

Isn't it odd to choose something as limiting as 0 and 1 for a numerical system? The common numerical system of this age is the decimal system. You count from 0 to 9, then start over again by incrementing the tens place: 10, 11, 12, and on and on. There are other numerical systems, like hexadecimal, which counts from 0 to 15 (written 0xF) and then starts over again.
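For a quick feel of the difference, here's the same number written in all three systems, using Python's built-in formatting:

  n = 11
  print(n, bin(n), hex(n))   # 11 0b1011 0xb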

It seems like starting over again, and keeping track of how many times you've gone around, is very tedious. So why would anyone want a system where you start over again after only 1?!

The answer to this question lies in boolean algebra. To evaluate the expression A & B -> C, the listener had to check whether each of the statements (A, B) was true or false, and whether the conclusion was valid or invalid. That's the only information necessary to evaluate any expression. You can imagine that expressions can get really large and complex. Philosophical arguments among people can become very complex and heated, each statement building on top of previous complex statements. But no matter how large an expression gets, for each statement you only ever need to know one of two things: whether it is true/valid, or false/invalid.

Two things, dos, double, bi... binary?

The reason that computers store information in bits, 0's and 1's, is not to let computer geeks communicate with one another through secret binary code. The reason information is stored in bits is to be able to evaluate logical expressions using boolean algebra. 0 represents false or invalid or off or whatever negative you need; 1 represents the positive true, valid, on, etc.

Manipulating information

I said that the way you go from static information to a dynamic application is through the manipulation and movement of information. How is boolean algebra used to manipulate information?

The first and obvious use for a computer is to be a calculator. We know, from classical mathematics, that you can do a lot of wonderful things if you can just start with the basics of adding and subtracting. One tremendous bottleneck to human achievement is how quickly we can add and subtract things.

If you can add and subtract quicker, you can do a lot of things you weren't able to do before, like go to the moon.

The invention of computers was driven largely by this need to do fast computations. And that can be done using AND and OR logic on a binary representation of any number. Here's an example of how you do 5 + 6:
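In binary, 5 is 0101 and 6 is 0110. A rough sketch of the column-by-column addition, with the single carry written above the column it lands in:

  carry:   1
     5 :   0 1 0 1
   + 6 :   0 1 1 0
           -------
    11 :   1 0 1 1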

It works very similarly to how you add two large numbers together and carry over to the next column if the sum in any column goes above 9. But in a binary number system, you carry over if you go past 1.

Now, the above diagram is lying to you; it leaves a lot of things out. You can see that it shows the addition operation on each column, starting from the right-most column. But there is no addition operator in computers. There are only AND, OR, NOT, and all the derivative operators. So what's going on here?

What you're looking at is actually a representation of a process where some logic acts on each column, starting from the right and moving toward the left. This little logic machine looks something like this:

Don't get discouraged! All you're seeing is a visual representation of a couple of statements using the AND operator, which is the big D-looking thing at the bottom, and an XOR operator, which is the bullet-looking thing at the top. The two statements are:

  • A XOR B -> S, and
  • A AND B -> C

This is a very elegant little machine that will add a column of bits, like the 5 + 6 in the earlier diagram. A and B are the input bits from the 5 and the 6. S is the output, and C is any carry-over.

Understanding how an XOR works, and fully understanding this adder machine, isn't necessary at this point. What's necessary is to understand that this can be used on each column of a bitwise addition, from right to left, like you do when adding large numbers.

As a child you're taught some simple rules: start from the right, sum everything in the column, take any carry-over and put it in the next column, then move on. This adder machine represents that process.
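For the curious, here's a rough sketch of that adder in code, assuming Python and using only the logic operators (the function names are mine):

  # The little adder machine: A XOR B -> S, A AND B -> C.
  def xor(a, b):
      return (a or b) and not (a and b)

  def half_adder(a, b):
      s = xor(a, b)        # the sum bit for this column
      c = a and b          # the carry into the next column
      return s, c

  # To add whole numbers, each column also has to accept the carry coming in
  # from the column to its right; two half adders chained together handle that.
  def full_adder(a, b, carry_in):
      s1, c1 = half_adder(a, b)
      s, c2 = half_adder(s1, carry_in)
      return s, c1 or c2

  # Adding 5 (0101) and 6 (0110), column by column from the right:
  a_bits = [1, 0, 1, 0]    # 5, least significant bit first
  b_bits = [0, 1, 1, 0]    # 6, least significant bit first
  carry = False
  result = []
  for a, b in zip(a_bits, b_bits):
      s, carry = full_adder(bool(a), bool(b), carry)
      result.append(int(s))
  result.append(int(carry))
  print(result[::-1])      # [0, 1, 0, 1, 1] -> 1011 -> 11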

Other simple processes can be turned into more machines; there are similar simple machines to do subtraction and multiplication and a host of other manipulations of information that are required for math to occur.

Moving information

When we consider the movement of information, it's not just about moving or copying information from one location to another. What we are interested in is the process by which this movement occurs. And, just as the manipulation of bits ended up being about simple machines, the movement of information also requires simple machines.

There are quite a lot of machines used for the movement of bits, but one particular machine is the keystone of information movement, the latch:

This machine, made of only two NAND operators, is called a latch because Q, the output, latches onto whatever S is and doesn't change even if S changes. It won't change until R resets the latch. Then the next time it gets another input from S, it'll latch onto that again. So in essence it remembers things, and that's essential when moving or copying information: you need to temporarily remember the information you're moving.
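Here's a rough sketch of the same idea in code, assuming Python (the class and variable names are made up for illustration). The two NAND gates feed each other's outputs back in, which is what makes the value stick; in the NAND version of the latch, the set and reset inputs are "active low," meaning you pull them to 0 to trigger them:

  def nand(a, b):
      return not (a and b)

  class Latch:
      def __init__(self):
          self.q = False
          self.q_bar = True

      def update(self, s, r):
          # Settle the two cross-coupled NAND gates.
          for _ in range(2):
              self.q = nand(s, self.q_bar)
              self.q_bar = nand(r, self.q)
          return self.q

  latch = Latch()
  print(latch.update(s=False, r=True))   # set: Q becomes True
  print(latch.update(s=True, r=True))    # hold: Q stays True even though S changed
  print(latch.update(s=True, r=False))   # reset: Q goes back to False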

Build it up from here

Obviously there's so much more to all this, and I don't remember any of it anymore. Remember I got a C in the class.

What I do remember is that these machines build on top of one another. Simple AND and OR's are combined to create adders and latches, which are put together to make multiplexers and decoders. These are combined to make machines that allow you to add and subtract 8-bit numbers and text, and move or copy them from one location to another.
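As one small example of that next layer up, a 2-to-1 multiplexer, a machine that picks one of two inputs, can be sketched from the same basic operators (again in Python, names mine):

  def mux(select, a, b):
      # Passes a through when select is False, b when select is True.
      return (a and not select) or (b and select)

  print(mux(False, True, False))   # True  (a comes through)
  print(mux(True, True, False))    # False (b comes through)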

These are then packaged into hardware like the CPU, GPU, RAM, bus, etc. These hardware components are then connected together on motherboards that are hooked up to input devices like the mouse, keyboard, and modem, and output devices like speakers, monitors, and the Oculus Rift.

And all of it is just a bit of logic.

That's a pun; it's actually a lot of logic happening really fast.