Computer science is maturing, and that's a problem. In short, we may soon face a situation where nothing is completely new. Of course, technology can help. Internet search engines help turn up lists of previous publications. But just reading all the abstracts can be daunting. In the long term, we can hope that Artificial Intelligence creates a grandson of Watson that can analyze previous work and help discover a new wrinkle that hasn't been tried. But what can we do in the short term?
Fortunately, we have a solution that makes research quick and easy. The approach is starting to catch on. This essay explains how you can use it effectively.
The other day, a newspaper article announced that the Internet would soon be replaced by a brand new system. Wow, what a story. It's huge. If the global computer communication system we know and love is going to be cast aside for something new, shouldn't the newspapers give everyone plenty of warning? In fact, shouldn't a banner headline scream at us:

    INTERNET TO BE REPLACED!
We'll need time to get ready. Maybe the new Internet will have a flaw and it won't work. What if we wake up one morning unable to order from Amazon.com? What if we are unable to connect to Facebook, or Instagram, or any other site? What if everyone stops receiving email about the millions of dollars waiting for them in the latest bank transfer scam? The media always carry warnings when a disaster is about to affect a few thousand people, and the impact from replacing the Internet will be much wider. So why aren't newspapers running stories that generate fear and panic?
You probably guessed the answer: the reporter is mistaken.
The error is a common one: confusing the Internet with the
World Wide Web.
The article wasn't about the Internet at all — it was about
HTTP, a technology used in the World Wide Web, and the Web is only one
of the applications that use the Internet.
Making a change to the Web might affect Web browsing, but the
Internet would remain unchanged, and other applications would continue to
operate.
It's a technicality, but something a good reporter should
have gotten correct.
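To make the distinction concrete, here is a minimal sketch in Python
(example.com is used purely as a placeholder host, not a specific site):
fetching a page uses HTTP riding on top of TCP/IP, while a DNS lookup uses
the same Internet without touching HTTP at all.

    import socket
    import urllib.request

    # Web browsing: an HTTP request, one application carried over TCP/IP.
    page = urllib.request.urlopen("https://example.com/").read()

    # A different application on the same Internet: a DNS lookup,
    # which involves no HTTP whatsoever.
    address = socket.gethostbyname("example.com")

    print(len(page), address)

Replace HTTP tomorrow and the second call still works; the Internet
underneath hasn't gone anywhere.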
The article argued that replacement of the Internet is both inevitable and
imminent because current technology doesn't work.
The reporter summed it up in a neat little sentence by saying that the
Internet depends on HTTP, and HTTP must be replaced because it is
broken.
Really?
The Web doesn't work?
In what universe could a reporter make such a statement with a straight
face?
Billions of people browse the Web every day, and they use HTTP to
do it.
You used HTTP to fetch this web page.
Has anyone ever looked at you and reminisced about the old
days before the Web broke?
Do they say, “I used to enjoy browsing the Web and wish I still
could, but I can't because it's broken”?
They might complain about a web site being unavailable or how
slow the response is on a given day, but “the broken Web” never
seems to come up.
What would motivate a reporter to say it is broken?
Maybe the reporter was assigned to write a story about a relatively boring
technical subject — a new version of HTTP — and decided to spice up
the facts.
Maybe the reporter thought that embellishing the story would impress the
editor.
So, has a new Web technology been created?
Surprisingly, no!
The article reports that a group is about to convene meetings to
start discussing the situation.
That's it.
The only thing that makes the meetings seem remotely newsworthy is
the absurd assertion that the Web is broken.
If we all agree the Web is broken, we will celebrate the small band
of brave engineers who are about to lead a crusade to fix it.
Everyone, please hold your breath waiting for the broken Web to
be repaired.
Most people believe the old aphorism: if it ain't broke, don't fix it.
With all that inertia, how can society move forward?
How can we be convinced to replace something that works?
The research community has faced the question for decades.
The traditional approach to research is both slow and painstaking because it
requires deep, thoughtful assessment to build a case for change.
A researcher must find out what has been tried, compare the new approach to
previous work, and construct a logical argument that justifies
change.
Furthermore, change is never one-sided — every change
involves a tradeoff.
Even if a new approach offers advantages, switching will take time
and effort.
Therefore, traditional research focuses on analyzing the tradeoffs.
The reporter has made an important discovery: we don't need to amass evidence, make thoughtful logical arguments, or think about tradeoffs after all. There's a quick, easy solution that avoids tedious analysis, skips the step of careful measurement, and justifies a change with no complications. Instead of proposing something new, just start with an assertion: what we have now is broken.
The approach is totally general and can be used with anything, even if the
current system seems to be working fine.
For example, once a reporter makes the assertion that the World Wide Web
is broken, the stage has been set.
The reader is mentally primed to hope for a solution.
Readers will be on the edge of their seats anticipating good news.
Surprisingly, readers do not seem to stumble over outlandish claims of
brokenness, even if their first-hand experience shows the claim to be
false.
It's magic: just declare brokenness, and people accept it.
The computer science research community has already started writing papers
that assert brokenness.
A few years ago, a paper submitted to a networking conference declared:

    The Internet is broken.
What should we do to fix the broken Internet? According to the paper, all problems stem from the fundamental design decision to use packet switching, and the paper asserts that replacing packet switching with something else will solve all problems. It's simple, twisted logic. The Internet is broken. The Internet uses packet switching. Therefore, systems that use packet switching are broken.

Observe how a brokenness declaration rules out the status quo with absolutely no need for further explanation. The research paper didn't say what to use instead of packet switching — it didn't need to specify a replacement because anything will be better than the broken Internet we have now!

And there's the secondary benefit: if the current system is broken, there's no point in making any comparisons between the existing system and the proposed replacement. Any change will be a step forward. The logic is convoluted. According to our axiom, the Internet is broken. Assume that broken things are not worth repairing, which makes a broken Internet worthless. Observe that whatever alternative comes along must have some positive value, which, by definition, will make it better.

Here's the real beauty of the approach: a researcher doesn't have to discover a way to solve all problems because the new system will be better than the current Internet, even if the new system has a few flaws. This makes research so much easier.
Here's another advantage: the assumption of brokenness eliminates the need
for quantitative analysis.
Without brokenness, a researcher is expected to report quantitative
improvements, which means a researcher must measure both the previous and
proposed systems.
But starting with a broken system makes improvement meaningless.
There's no way to talk about an N% improvement over a broken system because
N% better than zero doesn't make sense.
Thus, researchers who use the brokenness approach can skip all the
annoying measurements and pesky quantitative analysis.
The cat's out of the bag — people everywhere are discovering that the
concept of brokenness can simplify all arguments.
The education community has caught on, and uses it during discussions of
K–12 teaching methods.
We find new (and sometimes fairly old) educational approaches justified with a
simple assertion:

    Our schools are broken.
Shouldn't we stop wasting time repeating brokenness assertions?
Why should research papers and newspaper articles
waste space on a statement about the brokenness of one
particular thing?
Why should political speeches waste time telling us
what we already suspect?
Let's agree to rise up a level and take a broader view.
We can sum it all up in a simple, easily-remembered axiom:

    Everything is broken.
Once you understand the power of brokenness, you can use it to impress
your friends, family, and colleagues.
Suppose your friends are in a heated discussion about a Supreme Court
decision.
You can stop the discussion cold, just by asserting:

    The Supreme Court is broken.
You may be worried that once everyone starts leveraging brokenness, the idea will become
trite.
Maybe every time someone mentions a new idea, they will preface it with a
claim of brokenness.
Don't worry.
Researchers have already devised a solution: brokenness 2.0, the implicit approach.
Instead of stating the brokenness axiom explicitly, the implicit approach
assumes everything is broken and needs to be replaced.
The key phrase used to invoke the implicit approach is:

    Let's start over with a clean slate.
What happens when everyone tires of the clean slate approach?
We can only “start over” a few times before someone will
declare: “We tried starting over, and it didn't work.”
Good news!
Some security researchers have devised a third solution: brokenness 3.0, the
proactive approach.
Proactive brokenness consists of teaching acolytes to break things
and then turning them loose on the world.
Teach them how to turn computer systems into attack systems: arrange for a
thousand computers to spend hours trying a zillion passwords until one
succeeds.
Or pick a target computer and have a set of computers flood it with more
data than it can handle until it crashes.
Try anything else the user manual says to avoid.
Then, when something causes a system to crash, announce:

    See? It's broken.
Readers familiar with software probably have the sinking feeling that the Software Engineering research community has been using a subtle form of the proactive approach all along. How else can we explain what happens? Every few years, they seem to produce a new programming paradigm along with an assertion that the old approaches don't work and a claim that the new approach will correct all the flaws of the past. Who led everyone astray by foisting the old, non-working approach on us? Should we believe them this time? Or did we forget?

    Software Engineering is broken.
But then again, we don't need more examples.
By now, I'm sure you agree:

    Everything is broken.