Intro
"There's been nothing new in Computer Science since 1973. Change my mind!" -- Robin
If you are an engineer or scientist, this assertion may cause you some annoyance. Possibly severe irritation. After all, technology is changing at an accelerating rate, so there must be a lot of new things.
The ball is in your court. 😂
For many years I've used this conversational gambit to initiate discussion on credit attribution, a topic that I take seriously.
The simple threshold I use for the challenge is whether the counterexample is surprising: whether it is made from whole cloth. Some will say that threshold is too strong, but it has been useful for surfacing the definition of "new" in people's minds.
Example
I recently shared this challenge with someone who suggested an algorithm used in neural network training. This is a mathematical method widely used today in AI work and is often attributed to a paper from 1986.
1986 is more recent than 1973, the last time I checked the calendar. I therefore conceded that perhaps this would be a winner!
Once I got back from the bar, I started researching. OK, Googling, not tramping through library stacks for physical journals.
The first search hit revealed multiple references to work published before 1973. Going in reverse chronological order, the interesting ones were from 1970, 1962, 1960, and then one from ca. 1673, because math is old.
Analysis
Let's unpack this example by asking how the previous works are related to the idea under scrutiny.
The oldest reference in the suggested counterexample is to a well-known mathematical operation published in 1673 -- the Leibniz chain rule. If the technique is "just" an application of the Leibniz chain rule, is it "new"? If the math framework was extended in some significant manner to get there, then maybe the answer is yes. But does that judgment depend on the size of the leap? How big does the leap need to be?
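To make the question concrete, here is a minimal sketch of what "just an application of the chain rule" looks like in practice. The nested function and all the names in it are illustrative choices of mine, not taken from any of the referenced papers: the forward pass evaluates a composition, and the reverse pass multiplies the local derivatives together, outermost first, exactly as the chain rule prescribes.

```python
import math

# A nested function L(x) = exp(sin(x)^2), chosen purely for illustration.
def f(x):
    y = math.sin(x)
    z = y * y
    return math.exp(z)

# Its derivative, computed by mechanically applying the chain rule:
# dL/dx = dL/dz * dz/dy * dy/dx.
def dfdx(x):
    # Forward pass: compute and remember the intermediate values.
    y = math.sin(x)
    z = y * y
    L = math.exp(z)
    # Reverse pass: multiply local derivatives, outermost first.
    dL_dz = L            # d(exp(z))/dz  = exp(z)
    dz_dy = 2.0 * y      # d(y*y)/dy     = 2y
    dy_dx = math.cos(x)  # d(sin(x))/dx  = cos(x)
    return dL_dz * dz_dy * dy_dx

# Sanity check against a central-difference numerical estimate.
x, eps = 0.5, 1e-6
numeric = (f(x + eps) - f(x - eps)) / (2 * eps)
print(dfdx(x), numeric)  # the two values should agree closely
```

The gradient methods used in neural network training generalize this same bookkeeping to functions of many variables. Whether that generalization counts as "new" is exactly the question above.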
The 1962 reference is also a very well-known publication that essentially said, "hey, we need this thing, and here's why." It laid out what needed to be done and why it would be useful and important. But it didn't propose a mathematical solution. Does a solution to the problem then count as "new"?
The reference closest to 1973 is a 1970 master's thesis that, IIUC (I don't read Finnish), lays out the same math but applies it in a different domain. Duplicate, nearly simultaneous discoveries occur all the time in science. But is taking a fully realized method from one domain and applying it to another "new"? How do you attribute credit?
This exercise is not designed to pass judgment on what people should think is new, but rather to illuminate how and why they think it is. Specifically:
❓ What rules are we using?
❓ How are they chosen?
❓ Are those rules an objective framework?
❓ How new is something that necessarily rests on one or more previous works?
You may find it surprising just how much the people who came before us have already discovered, imagined, suggested, or predicted. We just need to take the time to look.
The Challenge
For those still reading: I continue to look for a viable counterexample. Maybe sharing this challenge with a wider audience will turn one up. Consider this an invitation to reply with your findings, both positive and negative.
📐 What idea did you consider new but, upon research, found otherwise? What did you find, and why do you no longer think it is new?
📐 What idea did you consider new and, after research, found no evidence to the contrary? How hard did you grep the domain sources? Why do you still think it is new?
Clarification:
📐 I consider advances in hardware rooted in physics, engineering, or materials science to be computer engineering rather than computer science.
Outro
Credit attribution for new ideas and findings is the currency that engineers, scientists, and academics spend on careers, equipment, and workspace. Given the current extent of our math and science knowledge, it is unlikely that anything we do will be constructed of whole cloth. We are standing on the shoulders of giants, and it is disingenuous of us to say otherwise.
So far, I've always found references for the proposed idea dating to 1973 or earlier. To be fair, I don't actually believe my assertion is correct. But if someone provides a counterexample, I will change the target date and continue stirring discussion. 😜
Lastly, I do have what may be a viable counterexample of my own. I won't share it here because I'm not sure it passes the test. A theoretical mathematician I discussed it with said that its core mathematical basis is very old. That math is outside my comfort zone, so I am not in a position to judge, but I am happy to discuss it offline.
Or maybe one of you will help me by sending it in as a researched response to the challenge!
P.S. In case you are wondering why I picked 1973: it is one year after the introduction of the object-oriented programming paradigm by the Smalltalk programming language. Interestingly, since I first picked that year, I have found comments that the OO idea is from years or even decades earlier. I'll stick with 1973 until a viable counterexample is supplied. Because I'm lazy.