The Worst Interview Question Ever… and How You Should Answer It


I’ve been on a lot of interviews and I’m sure you have as well.

I’ve also been on the other side – the one giving the interview – but I’ve never had a set of “personality test” questions to ask. I prefer to just start with some very open-ended questions such as:
• Tell me about the last project you worked on.
• What did you find interesting in your work for X?
• Why did you decide to leave Y and go to Z?

To me, it always seemed pretty easy to get a feel for a developer or an architect – to see their approach to work and tech – just by letting them talk. And guess what? The personality stuff just leaks out.

But it is clear from experience that a lot of interviewers do have “personality test” types of questions. And frankly, most of them are pretty useless.

My absolute favorite question (read with thick sarcasm) is the following:

“What do you think is your greatest weakness?”

This is an idiotic question. I don’t care how or when you ask it, there is no way this is going to lead to any useful information – and here’s why:

There are only two ways to answer this question.

#1 – You can answer the question honestly – and of course this would be a mistake.

You just met this person.

  • “I have a pretty serious anger problem.”

Of course, you wouldn’t say that. But even if you keep this in a professional context, what are you going to say?

  • “I sometimes have a real problem working in a team because I lack the patience to see problems from other people’s point of view.”

This is an interview – not a therapy session.

#2 – The second way you can answer is the way everyone does: you turn it into a sappy answer.

The standard choice is, “I care too much.”

I’m sure there are interview coaches and sites that tell you to do this. Clearly every politician has been coached this way.

  • “I think I’m probably too honest and sometimes this gets me into trouble.”
  • “Sometimes as a developer I care too much about the job and spend way too much of my own time after hours trying to make sure I’m staying on top of whatever I need to do.”
  • “You know, I think I’m too loyal to the companies I work for.”
  • “As a manager, maybe I’m too forgiving – I always give people a second chance, because, well, I think I just care too much!”

Somebody get me a barf bag.

But I’ve been thinking about this and I finally came up with the perfect answer to this question.

And yep – I’m going to give it to you – free of charge.

Here it is…

Next time someone during an interview makes that dramatic pause, leans forward, looks straight into your eyes as if to say, “Hey, let’s get real,” and asks, “What would you say is your greatest weakness?”

You just look right back at them and say:

“My greatest weakness is that when a person asks me a stupid question, I actually try to answer it instead of telling them it’s a stupid question.”

Of course, if you really want the job… you might just tell them you care too much.

4 Tech Conference Attendee Personality Types

You’re sitting in a cavernous room with huge monitors throughout – waiting for a keynote.

You look around. What are all these other people thinking?

I’ll give you the answer: there are 4 personality types sitting in that auditorium.

See if you can find yourself…

#1 – The Skeptic

“Yeah, it’s all vaporware. Can you believe this? Two developers, two weeks, and you can write an app to cure cancer!”

The skeptic believes it’s all smoke and mirrors. The technology du jour is a waste of time. Next year it will just be something else.

#2 – The Wide-Eyed Enthusiast

“Wow, when we get back we’re going to convert that corporate site written in classic ASP to containerized, Angular 4, TypeScript, ASP.NET Core running on Ubuntu by a week from Thursday.”

The wide-eyed enthusiast drinks the Kool-Aid by the gallon. They may be in management. They’re probably your boss.

#3 – The “Realist”

“Yeah, this is definitely the future, but it’s not my future. You get spun up by the bright and shiny, and when you get back to your desk you’ve got 12 stored procs to fix and 6 bugs in an SSRS Report.”

The “realist” sees the potential future, but can’t get past the limits they’ve created in their own mind to see how they can move forward with it.

#4 – The Inspired

“Wow, this is some sweet tech. But it’s not quite there, and it isn’t going to be easy to bring it into our environment. What can I do to get there?”

The inspired has a bit of the other three mixed in:

  • She has the experience of the skeptic without the cynicism
  • She has the optimism of the wide-eyed enthusiast without the naiveté
  • She has the awareness of the barriers of the “realist” without the lack of imagination

In the end, it is the inspired that go places.

Architecting and developing software takes a bit of faith, and inspirational energy is as important as physical, mental or emotional energy.

A conference can be a great place to stock up on that inspirational energy – but it takes a particular personality type.

Next time you are heading off to the big show – pack the right personality!

Predictive Analytics: Models, Models Everywhere

The race is on to make access to data-driven function points as easy as access to traditional function points.

Machine learning and other statistical techniques for predictive analytics have grown up in a silo. That silo is both cultural (requiring a cognitive and semantic phase-shift) and technical (requiring different languages, platforms and runtime environments).

Tearing Down the Walls

These barriers are beginning to break down.

Tools like Azure Machine Learning Studio and Amazon Machine Learning are bringing this capability into the mainstream development community. That process will continue and will require an ongoing training effort.

But to date, the cost of bringing the resulting models into the software stack is high. To be sure, you can call an Azure web service or an Amazon API, but for many there are business and technical reasons that make this a “no-go”.

Microsoft’s recent move to bring R models into SQL Server was a huge step. Stored procedures have a long and glorious tradition within the software stack – particularly in enterprise solutions.

But the move to SQL Server can only be seen as a first step. There are two big problems here:

  1. It supports R models only. Models created in Azure Machine Learning Studio cannot be ported into SQL Server. Certainly, you can build, train and evaluate your model in Azure ML Studio until you get a great model and then re-build it in R – but that is just the kind of extra step that hinders adoption.
  2. SQL Server is often not the database being used.

It’s Not Enough

So why can’t I have my model where I want it?

From a technical viewpoint, there is nothing here that restricts the runtime. (I may be wrong here, and would be happy to be enlightened.) The data is required to train the model but not for its execution.

Stored procedures are great – but that is not actually what models are. Models do not require proximity to large data sets or a specialized execution engine.

It’s just a function!

What is the problem here? We know how to create portable software.
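
To make the point concrete, here is a minimal sketch in Python (scikit-learn assumed; the data and model are invented for illustration). Training needs the data and a specialized library – but once the parameters are learned, scoring collapses into a plain, dependency-free function you could port anywhere:

    import math
    from sklearn.linear_model import LogisticRegression

    # Training requires the data and the ML library...
    X = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
    y = [1, 0, 1, 0]
    model = LogisticRegression().fit(X, y)

    # ...but the trained model is just a handful of numbers.
    weights = list(model.coef_[0])
    bias = float(model.intercept_[0])

    def score(features):
        """Pure function: no database, no engine, no platform dependency."""
        z = bias + sum(w * f for w, f in zip(weights, features))
        return 1.0 / (1.0 + math.exp(-z))  # probability of the positive class

    print(score([0.5, 0.5]))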

Fun Times Ahead

IMHO, this is a short-term engineering issue and we will soon see model availability change rapidly.

Microsoft’s announcement last week at Build 2017 of Azure IoT Edge is, again, a stop-gap solution. An early look shows some crazy-cool functionality, but it is a niche solution that is quite heavy and has significant platform dependencies.

Models need to be portable in the same way that other functions are portable.

In the end, we will just have two ways of building functions:

  1. We can write high-quality functions by deriving algorithms from the heads of experts (or becoming an expert ourselves)
  2. We can derive models from data (likely with the help of experts) and build functions to interrogate the models – see the sketch below
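
As a toy illustration of the two paths (Python, scikit-learn assumed; every name and number below is invented), both end in a function with the same shape – the rest of the stack cannot tell which one was written and which one was learned:

    from sklearn.tree import DecisionTreeClassifier

    # Path 1: the algorithm comes out of an expert's head.
    def approve_by_rule(income, debt):
        return income > 3 * debt  # a domain heuristic, coded directly

    # Path 2: the "algorithm" is derived from data.
    history = [[50000, 5000], [20000, 15000], [80000, 40000], [30000, 2000]]
    outcomes = [1, 0, 0, 1]  # illustrative past decisions only
    model = DecisionTreeClassifier().fit(history, outcomes)

    def approve_by_model(income, debt):
        return bool(model.predict([[income, debt]])[0])

    # To the caller, both are just functions.
    print(approve_by_rule(60000, 10000), approve_by_model(60000, 10000))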

Models Everywhere

Once we can do that on any platform, within any stack, and in any layer, we will be in a position to realize the vision of “intelligent apps” that bring predictive analytics directly to the point of decision.

Models, models everywhere… I think we’re on the brink.

Simplify Your Workflow: Be Your Own GC

Purging brings clarity.

I don’t know how. I don’t know why – but for some reason there is inevitably a sense of clarity and a release of creative energy when we purge our stuff.

David Allen, in his breakthrough book Getting Things Done, talks about the fact that our mind has a background loop going over all the things that don’t have closure. When we don’t purge, our brain is still spending cycles on anything and everything that remains unresolved.

Purging also brings clarity because, frankly, our brains are not that great at handling large lists. We have a very limited number of registers in our brain; studies put the number at seven, plus or minus two. Any set of more than about ten items must be handled by abstraction and categorization.

But stuff, on the other hand, seems to proliferate. Whatever management systems we use to organize our stuff – from stacks on the floor to mind maps on our hard drive to proprietary “Stuff Management” systems – once the number of items in a category gets large, we tend to abstract the whole set to – “Stuff I Need to Do” or “Stuff I Should Read” or some other useless category. But even when we organize our stuff into well managed, small-list categories our priorities change over time. Time opens up a gap between the organizational structures of our management systems and how we think about things today.

Given the value we gain from purging our stuff, why do we do it so infrequently?

Personal Garbage Collection

We need to establish the habit of personal garbage collection.

In software frameworks, Garbage Collection (GC) involves walking the heap and discarding every object that no longer has “liveness” – that is, every object that is no longer reachable from the active set of root references. Modern frameworks have determined that memory is precious enough that it is worth the time to “stop the world” and execute an efficient GC algorithm.
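
For anyone who hasn’t watched a collector work, here is a toy mark-and-sweep sketch in Python (the heap representation is invented for illustration – real collectors are far more sophisticated):

    def collect(heap, roots):
        """heap: {object: [objects it references]}; roots: objects in active use."""
        live, stack = set(), list(roots)
        while stack:  # mark: walk everything reachable from the roots
            obj = stack.pop()
            if obj not in live:
                live.add(obj)
                stack.extend(heap.get(obj, []))
        for obj in list(heap):  # sweep: discard the unreachable
            if obj not in live:
                del heap[obj]
        return heap

    office = {"desk": ["GoF book"], "GoF book": [], "CORBA security book": []}
    collect(office, roots=["desk"])  # the CORBA book does not survive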

For a software architect, the most precious resource is focus – so for us it is worth it to run frequent “stop-the-world” personal GC sessions.

Run a GC on Your Office

Most of us have a bookcase, or maybe two, in our office. Glance over – what percentage of those books has any value right now?

Seriously, it’s time to purge.

Yeah, that CORBA security book needs to get tossed and the Pro Silverlight 2 in C# 2008 can go. What about those free books you got at the conference 3 years ago – yeah, they are worth what you paid for them – time to go.

It’s time to go through your books, your magazines, the crap stacked on the floor. Most of that has to go.

The pointers to those objects have long since been deleted. It’s pretty amazing that a query with the predicate – “Will I ever open that book again?” can probably yield false on 80% of the books on your shelf. Clearing out the chaff is good for the mind.

Next time you glance at the shelf you will see books that are useful, books that inspire you, books that you could read.

Purge Your Physical Work Space

When’s the last time you purged your physical work space?

“Wait a second,” you say, “I have 3 hours to finish the initial design and present it to the stakeholders. Seriously, you’re telling me to take time to purge my physical space?”

You know what good old Abe Lincoln supposedly said – “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”

To work on a design, your axe is your brain. The process of software design is the creative process of balancing forces. It is complex. It requires some serious focus.

Purge, and then think. We know we need to close the door, turn off e-mail, silence our phone – whatever else we can do to cut out distractions. But distractions can be internal as well as external, and purging removes the internal ones. Looking down at an empty, clean desk somehow signals to the brain that there is just one task for the next 3 (ok, maybe 2 ½) hours.

Of course, it could just be me – but I don’t think so.

Run a GC on Your Management Systems

The systems we use have a tendency to get stale. I love mind maps for visualizing what I’m working on. Adding elements is an extremely low-friction activity. But mind maps seem to have a half-life. They have to be garbage-collected on a regular basis.

Mind maps, bug/task management systems, all those “Read Later” folders in your e-mail system – they all represent heaps that need to be walked.

Personal GC Algorithms

So how do we actually perform the GC?

I’ve found three algorithms valuable.

Purge by Moving

One algorithm mirrors what happens when we physically move – whether it is our home or our office. Stuff accumulates because once it is in place it takes effort to remove it. When we move, it takes more effort to carry an item along than to throw it away. That effort differential gets weighed against the item’s value, and suddenly a high percentage of our stuff gets thrown away.

Of course, digital assets take no effort to move. We have to artificially create the dynamics of a physical move.

When you come to the end of a milestone (real or simply a time marker like the end of the month), force yourself to recreate your lists.

Friction is good in this instance. Make it so that it takes an action to bring forward items in your list. This forces you to re-evaluate every task/item in terms of your current interests and priorities.

Purge everything that can be purged. Be brutal.

Rank and Cut

A second method for purging is Rank and Cut.

So, let’s say you have a “stack” of articles you’ve bookmarked or, if you’re old-school like me, you have printed out to read when you find yourself with a few extra minutes.

This is a great plan, but after a while the “stack” gets a bit large and, more importantly, stale. This means that the stack no longer inspires you.

You don’t look at the stack and say – “I wish I had a minute or two to get into those interesting articles,” but more like “Someday, I really should read through those articles.” Inspiration has turned into duty. The size of the stack is inversely proportional to the likelihood that anything will get read.

It’s time for “Rank and Cut”. Here’s the algorithm…

  • Step 1 – Select an integer <= 9 (the max # of registers in your brain)
  • Step 2 – Bubble sort
  • Step 3 – Truncate to that integer.

Step 1 is usually pretty easy. My advice: go low. Let’s say we pick the number 5.

Ok, now do the bubble sort. Select the first article and put it at number 1. Pick the next: is it more interesting than #1? If yes, put it at #1; otherwise put it at #2. Pick the next: is it more interesting than #1? If not, is it more interesting than #2? If no… yeah, yeah, you know how to do a bubble sort. In the end, you have the stack listed by how much each item inspires you to read it. Of course, you will find several that inspired you in the past but hold no interest for you now – you can simply purge those.

Now the hard part. Truncate to that number 5. Don’t waver at this moment. You must be brutal – just cut to 5.
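
If it helps to see the whole algorithm end-to-end, here is a minimal sketch in Python (the titles and interest scores are invented – in real life the comparisons happen in your head, pair by pair):

    def rank_and_cut(stack, keep=5):
        """stack: list of (title, interest) pairs; returns only the survivors."""
        ranked = sorted(stack, key=lambda item: item[1], reverse=True)  # Step 2 (any sort will do)
        return ranked[:keep]  # Step 3: be brutal and truncate

    reading_pile = [("Actor Model Deep Dive", 9), ("SOAP vs. REST", 2),
                    ("Ten SSRS Tricks", 1), ("CQRS in Practice", 8),
                    ("Monads for Architects", 7), ("Silverlight Futures", 0)]
    print(rank_and_cut(reading_pile, keep=5))  # Step 1: we picked 5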

Now you have a stack of articles that are truly interesting, and the most interesting is on top. The stack should inspire. The next time you head to the dentist, you are much more likely to pick up the top one or two and use your time in the waiting room better than leafing through a 10-month-old golf magazine.

Archive and Purge

For some reason, we have a reluctance to purge. For the most part, I think this is F.O.L.S. – the Fear of Losing Something. What if things change and we need it?

The best way to overcome this obstacle is a two-step process – archive and then purge. Fortunately, because we live in a digital world this is a pretty simple process.

Make a backup before you start your purge.

One version of this strategy is great when preparing a presentation. Let’s say you’ve been gathering material for a presentation. You have a document with notes from several articles, snippets of code you might use, ideas about points you want to emphasize, rules-of-thumb you have gathered from your experience, etc. And now it’s time to put together the final presentation.

Take the document you have created and make a copy (I call it my “Tear Down” copy). Now work only with the Tear Down copy. At the top of this document, put 3 things –

  1. Attributes and goals of your audience
  2. Your mission for the presentation
  3. A short list of goals to support that mission

Now, with those key elements in mind, heap-walk your document and delete everything that does not have an active reference to your audience, your mission or the goals you have chosen.

Be absolutely brutal.

You can have the courage to be brutal because you have an archive copy of your document in case you realize you need something. When you are finished, the document will be much more focused toward the presentation you are preparing and you won’t have to wade through pages and pages of irrelevant material as you put together your outline and content.

When the time comes to cover the same material for a different audience, run the algorithm again on the original document, applying the attributes and goals of your second audience.

Be Your Own GC

Most modern apps rely on frequent and efficient GC.

Being a good modern architect can be supported through the habit of frequent and efficient personal GC.

The Analytics Layer

Application architects love layers – and I think it is time for a new one: the analytics layer.

“You know what else everybody likes? Parfaits! Everybody likes a parfait.” – Donkey

Data analytics have become too compelling to ignore any longer – even at the application level, and not just within specialized applications but within the standard application software stack. Architects have always been about taming complexity, and the techniques coming out of the analytics world bring powerful tools to the table. We’ve always had a data layer. It’s time for the analytics layer.

Early trends in data warehousing and BI had the unfortunate side effect of creating silos within organizations and within software applications. Analytics was a separate entity. Star schemas, denormalized data and batch processing took significant processing power and were therefore expected to be “off-loaded” from the standard operational system. Dashboards and pivot tables were bolted on as a way to see into the state. And though over time the delay was minimized to the point of “near-real-time”, the silos remain. Today, analytics is not a layer; it’s a module.

But the true power of analytics lies in its ability to provide effective simplicity. Google became one of the most profitable companies in the world not by creating a simple search – other companies had the idea of a single text entry – but by making that simple search nearly always find what the user was looking for, through developing the most effective ranking algorithm. Google didn’t just build a data layer; they built a high-powered analytics layer and stacked their software on top of it.

An analytics layer holds the promise of delivering intelligence directly to the point of decision within the application. Up to now, we have offered users choices but expected them to bring the intelligence to the table. We need to go beyond explaining the options available to bringing related real-time information and statistics to bear on the decisions being made.

Consider two simple examples: Amazon and Stack Overflow. Both of these websites changed the way decisions are made by bringing highly reliable information to the point of decision. The approach is different – simple ratings vs. stack ranking – but each generated incredible gravity for its site because of the power of aggregated, targeted, reliable information delivered right at the point of decision. Both applications rely on a powerful analytics layer.

Interestingly, the analytics delivered in these examples did not require advanced algorithms, neural networks or machine learning – but rather the answer to one question: “What information would give the user the best chance to make the right decision?”

Sometimes the answer to that question is extremely simple – a basic comparison, information about what other users decided, etc. In other cases, more sophisticated algorithms and statistical methods are required. But in all cases, the question needs to be asked. Having an analytics layer forces the question to be asked. When designing the data layer, the data architect asks what options need to be provided to the user. When designing the analytics layer, the analytics architect asks what intelligence gives the user the best chance of a successful choice. And when designing the UI layer, the UX architect asks how best to bring those two things together.
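
A rough sketch of that division of labor, in Python (every class and method name here is invented for illustration – this is a shape, not a prescribed API):

    class DataLayer:
        """Answers: what options need to be provided to the user?"""
        def options_for(self, user, context):
            return ["option-a", "option-b", "option-c"]

    class AnalyticsLayer:
        """Answers: which option gives the user the best chance of success?"""
        def __init__(self, data):
            self.data = data

        def recommend(self, user, context):
            options = self.data.options_for(user, context)
            # Intelligence at the point of decision. Here a stand-in ranking;
            # in practice a rating aggregate, a simple statistic, or a model.
            return max(options, key=lambda o: self.score(o, user, context))

        def score(self, option, user, context):
            return len(option)  # placeholder for the real analytics

    # The UI layer brings the two together: present the options, but lead
    # with the recommendation and the evidence behind it.
    analytics = AnalyticsLayer(DataLayer())
    print(analytics.recommend(user="alice", context="checkout"))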

Vendors and platforms are touting data analytics products and tools, but until application architects begin to think in terms of an analytics layer, the true potential of these techniques will remain largely untapped.