Jeff Foster is a software engineer at Red Gate, and also happens to be the current Head of Software Engineering. Among other things, we talked about his first job writing code for furnaces, debugging using assertions, and why we tend to end up with complex, hard-to-change software.
Let’s start at the start: how did you get started with programming?
When I was little, I was left-handed, which is the key to my entry to the programming world. I’m old, so I used a fountain pen, and dragged my hand across the page whilst writing completely illegibly, so I decided to get a computer. It was an Einstein or something like that, a very old computer with a 6502 processor. It had this thing called MOS, which was like a REPL for assembly code. My first taste of programming was probably a bit of BASIC on the Einstein, and maybe entering cheat codes by poking around in memory. Good fun times.
And after that?
I graduated to PCs via an Amiga I guess. I seemed to skip the Spectrum and BBC Micro stage, which was a bit unfortunate. My dad used to work in the defence industry, and they would throw out their old computers rather than go to the expense of maintaining them, so dad just skip-dived and got me an old IBM PS/2.
Were you programming on the PS/2?
Yeah, I always did a bit of programming, mostly BASIC, probably because it was the only thing available. By the time I’d got to GCSE, I did a bit of TurboC++, because there were some old books around, but I wasn’t taking any of it very seriously. It was just hacking up demos, scrolling my name across the screen in great big fonts, or writing the world’s worst shoot-em-up. My proudest thing was probably a Visual Basic editor for Premiership Manager ’92 or something, that allowed you to just load up the players’ records, and then edit their stats to 99, 99, 99. It was good fun.
When did you start programming in earnest?
I had school projects when I was doing my A-level in computing, not to be confused with computer science. I wrote some backend system for an estate agent’s, which sounds really impressive, but my auntie happened to own the estate agent’s, so I could go in there and write some code, and it didn’t really matter if it didn’t work.
I think I managed to go all the way through university without writing a real program, as in something that’s independently useful for someone else. You have to wind quite a lot further forward to see that. Even when I did my PhD, that was just solving a problem for myself.
What was the PhD?
Gait recognition. We filmed people and the way they walk and then tried to recognise people given an arbitrary video of someone walking. That was pretty good fun but it wasn’t really lots of coding, it was me writing algorithms in MATLAB and so on, and maybe coding them in C# or C++ if I wanted it to go a bit faster.
And did it work?
Well, I got a PhD, which means it can’t have worked too badly, but it’s one of those things that only worked in the idealised case of filming someone walking perfectly orthogonal to the camera under perfect lighting with the moon in the right phase, so it was certainly not commercially viable when I was doing the PhD. I think it’s still going on now at Southampton.
Once you’d finished your PhD, did you leave academia or stay for more?
I worked out what I should do. Should I stay in academia and just sit in the ivory tower drawing abstract diagrams and writing bad code, or should I actually try to get a real job? I went for a half-way house as a research scientist at a company called Sira. They used to be the UK’s equivalent of whatever the body in France is that keeps a metre ruler, measuring exactly one metre, somewhere very safe. The UK equivalent of that had been going for a hundred years or so. They did some really cool shit actually. You’d better censor that [Of course I will, Jeff].
They did every kind of measurement you can imagine. For the two years I was there I worked on projects as varied as looking inside a furnace with a water-cooled camera, and trying to automatically adjust the gas-air mix to give the most potent flame. That was really fun, especially when you got to go onsite. They had an arc furnace, which is a very, very big furnace that is ignited with an incredibly big spark. The noise it makes can only be described as the gates of hell being opened. The crack as the voltage goes between two points two metres apart is terrifying.
I also worked on a thing called the smart ophthalmoscope. An ophthalmoscope, as well as being the most difficult word to spell in the English language, is the thing they shine into your eye to get photos of your retina, at the back of the eye. We were trying to use a camera so your ophthalmologist could move it around your eye and build up a full model of the eye. I was just technical advisor on that, linking together a few universities.
That was probably my first taste of writing real programs that were used by other people. And that was a disheartening experience in some respects. So there I was writing my code, and there was my boss, merrily rewriting my code, and that was the first time that had happened. And it’s quite a difficult thing to get across, because you’re used to your own code being used as you wrote it, and then someone else is rewriting your code and making it do things that you don’t really agree with. I guess the first stage of the maturation of a software engineer is realising that other people read your code, that other people can change your code, that other people can be better at coding than you are, and that in fact not everything works first time. So that was a good experience overall.
There were some really clever people there that I worked with in other disciplines. There was a guy who had been programming since the age of transputers who was just about to retire. He was still coding away in x86 assembler, doing JPEG decompression in real-time on hardware; it was just completely awesome. So I learnt a lot from that.
What made you decide to leave?
Decide to leave is strong… the company was going to go into the ether and disappear, so we started having those scary meetings where someone says “Hi everybody, there’s a slight problem with the pensions: we’ve worked out we haven’t got enough money”. Once a company has that kind of meeting, nobody wants to stay. I decided then that the thing I enjoyed more wasn’t the science, and it wasn’t the networking, because I had to do lots of sitting between industry and academia and hearing arguments from both sides. That wasn’t really interesting. What is interesting is building and making stuff. So I had the idea, “Hey, let’s become a software engineer. Let’s do it properly.”
I looked around for jobs everywhere and then eventually found a small company in Bristol called Dynamic Aspects that was after someone with ten years’ programming experience, and I had precisely two years of real experience. So off I went with a glorified, slightly fanciful CV. It wasn’t a lie, but it was certainly turning everything up to eleven. I managed to blag myself an interview with this tiny company. The interview was actually just working with people for two days. I found that awesome. We were whiteboarding solutions to something I’ve always been interested in, which is compilers and language design. It’s my favourite part of software engineering, but something I thought I’d never be able to get a career in because I assumed all of that clever stuff was done by big companies in Silicon Valley. I didn’t think I’d find it in Bristol.
It was a startup, so there were only four other people on the team when I joined, one of them being the project manager; everyone else was a vastly experienced software engineer who really knew what they were doing. In the few weeks before I got the job, I decided it was time to become a software engineer. One of the things I remember is sitting in the car waiting for my wife to finish a job interview, and reading the entire GoF patterns book from end to end. That really opened my eyes to the fact that programming isn’t just a bunch of if statements arranged into vaguely short methods, which was what I’d taken from programming before.
The first six months of the job at a small startup company were probably the most challenging of my life. The codebase of domain/j, as it was called, was very, very well written, held to much higher standards than I’ve ever seen anywhere else, and there I was, not really knowing what I was doing, slightly changing the codebase so that it wasn’t perhaps as good as it once was, and being chastised by my teammates for doing so. But it was very much a bootcamp-style work ethic: submit code, and then be politely and enthusiastically told that I was doing it wrong, and offered a better solution. I think after about six months, I was fairly confident I could go into work and not make any mistakes, which is a nice place to be, compared with worrying every time I went into work that I was going to break the code. That company carried on for a couple of years, but it was probably more limping along after the first year. We were writing a refactoring tool for Java, and this was around about the time IntelliJ and Eclipse were starting to come out, so there was a gap in the market at that time for a refactoring tool. But we were slow to execute. We had some really unique ideas back then that are really starting to see the light of day now in things like Light Table, the Python/Lisp-y dynamic execution environment for code. So it was sad to see that company fade away over the last year that I was there. And then it was on to another job, and probably the first time that I’d had to move job even though I didn’t want to.
You mentioned that they had brilliant code yet were too slow to execute. Do you think that those two things were related?
I think it was more the marketing side of things, as in we didn’t have one. We weren’t trying to sell a product at all; we were going to quite academically themed conferences like OOPSLA and presenting our work. We got the right people interested, so we had people from Google and Eclipse talking to us, but I don’t think we ever did anything ballsy enough at that stage. What would I have done differently back then? I suppose I’d have tried to launch a demo on the Internet, no matter how barely functional it was, and I’d have tried to get people actually interested in it. We did less of that. We tried to get the whole solution done, and then say “Hey world, here’s this awesome new thing”, but we didn’t really validate that the world wanted that.
So I don’t think great code was a thing that stopped us, if anything that was a thing that helped us. When we were doing presentations at conferences, if we had a conversation with someone about a cool new feature, we could sit in the hotel room and hack that feature together without polluting the rest of the codebase. It might not work, but it wouldn’t take the rest of the application down with it. So the great code provided us with a fluidity to change things, it was the marketing that didn’t seize the opportunity.
After that, there were only a couple more jobs before I came to Red Gate.
Outside of work, do you have side-projects?
Computing-wise, I have a blog, I try to do stuff with it, I fail. That about sums it up. It was fairly active a couple of years ago, and then I had another child, and that scuppers updating the blog. Hopefully, I’ll get something going again. I do quite enjoy programming in my spare time. Hopefully I can persuade my children that programming is an interesting thing to do, but they’re a little young for that now.
Changing subjects slightly, how do you like to debug code?
Everyone can understand what code should do when they read it, unless it’s really bad. If it is that bad, if you can’t look at a lump of code and understand what it’s supposed to do, you need to refactor it before you go any further. So, the stage I like debugging at is where I have the code, I have a mental model of how it works, and I have the runtime behaviour, and the two don’t match up. I’m getting an exception, and I usually find that’s because I’ve misunderstood the way my code is working with the outside world. It might be something as simple as a function I’ve asked for the length of a string: if I give it a null, it returns -1. That’s the sort of thing that might not be apparent from the function signature but is still reasonable. So when an API is doing something that you don’t understand, you can’t reason about it from the code alone.
The way I like to try and reify my mental model is to start littering the code with pre- and post-assertions. A huge percentage of bugs are cases where some state has gone wrong, and deep inside some function you get, say, a null pointer exception. My mental model of the code says this shouldn’t be null at this point, so I’ll stick an assertion in there and run the code again. If the assertion fails, I need to move my search to the next layer out. And I usually keep those assertions in after I’ve finished debugging. I find that by pushing the assertions outwards, you start in the middle of something where you can’t see the problem, but as you add assertions, all of the callers of that function that have gone hideously wrong now validate their pre-conditions and post-conditions. And by extending the search outwards, I usually find the bug. I won’t say always, that sounds pretty idealised, but it works more often than not.
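To make that concrete, here is a minimal sketch of what those pre- and post-assertions might look like in C# (the Order types and names are hypothetical, purely for illustration, not anything from the interview):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public record OrderLine(decimal Price, int Quantity);
public record Order(List<OrderLine> Lines);

public static class OrderProcessor
{
    public static decimal TotalPrice(Order order)
    {
        // Pre-conditions: if either of these fires, the fault lies with a
        // caller, so the search moves one layer out.
        Debug.Assert(order != null, "order should never be null here");
        Debug.Assert(order.Lines.Count > 0, "an order should have at least one line");

        decimal total = order.Lines.Sum(l => l.Price * l.Quantity);

        // Post-condition: the result should match the mental model.
        Debug.Assert(total >= 0, "a total should never be negative");
        return total;
    }
}
```

Debug.Assert calls are compiled away in release builds, which is one reason they can safely be left in once the debugging session is over.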
So you prefer to try and put your assertions in rather than using a symbolic debugger?
The symbolic debugger is great: I can go in and I can see the state of everything. That’s fine when it’s just one level of indirection, but not when the place where some wrong value is used is far from where it was set. I think the symbolic debugger is useful for inspection at every stage of pushing out assertions, but very rarely do I have an “aha!” moment in a debugger.
In a similar vein, how do you begin to make sense of a new codebase?
Do something with it. I don’t think you can understand a codebase by just staring at it as some inanimate object. First of all, doing something means building and compiling it: can I get the damn thing working? And usually even that uncovers some interesting things. You compile a project, and you discover you need to install this SDK or it uses that SDK, and you wonder why that is. Then running the app and seeing how it works tells you a lot as well. From the look and feel of a web app, I can probably have a guess at the underlying technologies it’s using. Is it using jQuery UI, is it using ExtJS? The next step to understanding it is to try to do something with it. Usually the context in which you’re understanding a codebase is that you’ve moved team or you’d like to add some new functionality, so it’s usually quite easy to find a reason to do something to the codebase. It might be as simple as adding a button to the UI, which usually means finding a string in the UI that looks memorable, searching for it in the code, and looking up who calls that. Play with it, add a button, can I hook in some events, can I trigger something?
If I were working on this codebase that I was going to give you, what would you ask me to do to make it easy for you to understand it?
I’m quite fussy about code. There’s a basic standard of micro-complexity that I expect every codebase to have, and that means not doing stupid things. ReSharper will usually highlight stupid things like “expression is always true”, or “Boolean statement can be simplified”. When I look at a method, I don’t want to have to think “Why is it doing that?”; it should just be obvious. It shouldn’t say “if (a == true) return true else return false”, because that tells me you don’t care about your code. I want someone to have cared about it so that there’s no code to take away, because it is the essence of the problem. It’s about having exactly the right amount of complexity, no needless complexity. Use the right design patterns, use the obvious things, don’t subvert the type system with your hideous casting, make the code say exactly what it does. Quantifying that is really hard, and it’s something that I’ve tried to get people at Red Gate to do before by asking them what great code is. The overall thing is that everyone knows great code when they look at it, at least on the micro-scale. If I look at any one method, I should be able to tell that someone cares about it.
So if you’re giving me code, I want you to care about every method at the minimum. I think once you’ve done that, I’m actually not so worried about the global structure. If you care about individual methods, then that makes refactoring things much easier. Ideally, of course, everyone would have a nice design, but sometimes those things are only easier to see in hindsight, so what’s more important is to have a malleable codebase where there’s nothing daft in it, so it’s free to refactor, and then you can bend it to how you need for the next stage.
Speaking of complexity in source code, Brooks [author of The Mythical Man-Month] made the distinction between accidental complexity and essential complexity.
Some things are complicated. If I’m solving the travelling salesperson problem in polynomial time, it’s fairly unlikely that there will be a simple solution for that. Chances are that I’m going to have to do something clever algorithmically, but there are very few things that require clever algorithms. We tolerate far too much complexity as software engineers. Most code should be much, much simpler than it is. I think that’s what Brooks is going on about with accidental complexity, where we make our own problems by not checking assumptions. Maybe we go to vast efforts because we think that we’re going to have integers that are 65 bits long rather than 64, but it turns out that’s a detail that doesn’t actually apply to us. We don’t question the assumptions that people give us strongly enough, so we make massive amounts of work for ourselves.
How do you try to manage that complexity in the large?
To manage complexity in the large, never get complexity in the large! Never set out to make a large system, never write a massive system. Write lots of small ones that can be connected together via well-known APIs. But don’t ever set out to build one massive system. I think that’s always the wrong approach. You can always build it fractally from smaller components. It’s kind of the self-similarity principle. A class is responsible for taking some information in and performing some processing, similar to a method but at a higher level. A whole package works towards a similar goal, it’s responsible for taking this whole family of objects in and giving this family of objects out. An entire program is responsible for one part of the domain problem that you are solving. And then again, up one level to the system scale, and that whole system is responsible for a very simple thing. The very simple thing would be the business problem you’re solving.
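As a toy illustration of that self-similarity, here is a hedged C# sketch (the names are entirely hypothetical): each unit takes something in, processes it, and hands something out, and composing two of them yields a bigger unit with exactly the same shape.

```csharp
// Each level of the system has the same shape: information in,
// some processing, information out, joined through a well-known API.
public interface IStage<TIn, TOut>
{
    TOut Process(TIn input);
}

public sealed class Parse : IStage<string, int>
{
    public int Process(string input) => int.Parse(input);
}

public sealed class Double : IStage<int, int>
{
    public int Process(int input) => input * 2;
}

// Composing two small stages gives a larger stage with the same shape,
// which can itself be composed again, fractally.
public sealed class Pipeline<TIn, TMid, TOut> : IStage<TIn, TOut>
{
    private readonly IStage<TIn, TMid> _first;
    private readonly IStage<TMid, TOut> _second;

    public Pipeline(IStage<TIn, TMid> first, IStage<TMid, TOut> second)
    {
        _first = first;
        _second = second;
    }

    public TOut Process(TIn input) => _second.Process(_first.Process(input));
}

// new Pipeline<string, int, int>(new Parse(), new Double()).Process("21") == 42
```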
This sounds wonderful to me!
Yeah, it doesn’t happen though, does it?
Why do you think we tend to end up with such complex systems?
I think as engineers we’re despicable people. We don’t try hard enough, we just wave the white flag of surrender and hack something in. We should be standing up for ourselves and doing things right, even if we realise that takes a bit longer. Everyone wants to write the right code, but seemingly only a vanishingly small number of people actually have the gumption to do it. By gumption, I mean: if someone asks how long it’s going to take to do something, don’t tell them how long it’ll take you to do it quickly and hackily and dirtily, tell them how long it’s going to take to do it right. If you give your project manager the option of “It’ll take two days if I hack it together or a week if I get to do it right”, they’re going to tell you to take the two days, and that they’ll find time in the backlog to tidy up. They never will, and we all know it, but we all perpetuate that lie, and we keep taking the shortcuts. It’s very frustrating.
Do you think teams feel the effect of those shortcuts immediately, or over time?
I think the problems build up. So I’m going to use my favourite dung beetle metaphor. Every time I write some bad code, I leave a taint of it in the codebase. A lump of dung, if you will. And as I add more and more of that, the dung ball gets bigger; I am the dung beetle pushing around this dung ball. It takes ages for the amount of technical debt, or crap, in your system to build up. You put a little bit over there, a little bit over there, and it’s going to take a while until they meet up and the system is hard to change. At that point, you’re dead. I don’t see how software continues from that point onwards.

In a previous job at an enterprise software company, some of the core software is ten years old, and it’s always been built on the idea that if customer X is asking for feature Y and giving us money, let’s do it as quickly as possible and get that money. It’s worked really well for ten years, and the company’s made an awful lot of money. But now that codebase is like cement. There are thousands of config settings that are effectively “if customer1 then enable this completely different code path”. The code is impossible to maintain; you can’t change anything without some random other part of the system breaking. I don’t see how a software company ever recovers from that.

There’s a Spolsky article on the death of Netscape. I’m not sure whether it was for technical debt or for some other reason, but they decided that their only option was a rewrite. Well, if you reach that point, your company is dead. I think software does just die: it reaches the point where the technical cost to make a change with any confidence is so high that you’re dead in the water, you can’t add new features faster than your competitors, you’re just saddled with an impossible codebase whose developers have left, and I genuinely believe that’s the end of the company. I’d love to hear if there are any successful examples of old code still working today. Maybe that’s why Microsoft is good; maybe the Word codebase of 1985 has evolved right through to Office 2010, and maybe that is an example of a successful project that’s gone on over time.
Or perhaps the Linux kernel?
Yeah, if you look at the commit process for that, you can see why Linus has kept an iron grip on it and enforced very, very strict coding standards, going through a tiered review phase. I think Linus still reviews the majority of commits that go into the kernel. He’s even written a version control system to make this process easier. I think you have to enforce those really high standards to keep a codebase going for the long term. I don’t think the majority of companies do that. I think most companies focus on the short-term aim of 3-5 years, become absorbed into a larger company, and then think “job done”. The company has made its money.
How do you create an environment with that long-term thinking?
There’s a fancy business principle that companies form themselves in the image of their founders. At the head of a successful software company, you need someone who understands what successful software is from an implementation point of view; something more like a Google than an IBM. In the good old days of Microsoft with Bill Gates, who is by all accounts a very strong software engineer, I think the company rallied around that image. On the other hand, Oracle has Larry Ellison as their CEO, who’s just regarded as being Satan. Larry Ellison, Prince of Darkness, I think is his nickname. He’s certainly not a software engineer. I don’t know whether or not I’m being unfair, I don’t know Oracle all that well, but I get the impression that their software isn’t held to the same high standards that Microsoft’s or Google’s is.
Do you think it’s ever possible for that feeling to come from the bottom up?
I think you’d have to have some help from the top. I don’t think you can ever appreciate how difficult software engineering is unless you’ve been in that situation yourself.
How do you think your style of programming has changed over time?
When I left university, I could program perfectly! I was the only one ever running my own code, so everything was fine. I might need to rewrite it a hundred times, but generally, once it worked, it worked. Then I got a real job and realised that it had to work all the time, and that when other people were looking at it, they had to have a bit of confidence in it. So that was when I learnt that other people read your code: it needs to be good rather than just work. And it needs to have tests so that I can verify that it worked then and it still works now. And more recently, I do a lot of functional programming in my spare time, because that’s the right way to write code. I don’t use as much mutable state as I used to. I prefer static methods over object-orientation, because a static method takes stuff in and gives stuff out. It’s very, very easy to test.
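A minimal sketch of why that is easy to test, in C# (the names here are hypothetical, not anything from Red Gate’s code):

```csharp
using System;

public static class Pricing
{
    // A pure static method: values in, value out, no hidden state to set up.
    public static decimal ApplyDiscount(decimal price, decimal discountPercent)
    {
        if (discountPercent < 0m || discountPercent > 100m)
            throw new ArgumentOutOfRangeException(nameof(discountPercent));
        return price - price * discountPercent / 100m;
    }
}
```

A test is then a single line with no mocks or fixtures: assert that Pricing.ApplyDiscount(100m, 25m) equals 75m.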
I guess I did have a small phase of being a Gang of Four patterns devotee, but I soon kicked that habit and learnt either to do things in a language that supports the concept as a first-class thing, or just to recognise that the reason I’m doing the visitor pattern is that this language doesn’t support double dispatch. It’s easy to become enamoured by patterns and think they’re the solution to everything, but there’s a stage in your evolution where you realise that they are just tools to fix kludges in the language, and as your programming language evolves, so do the patterns you use. A good example would be C# delegates and anonymous functions rendering quite a few of the patterns in the book meaningless, because you can pass first-class functions around. So, I guess as languages have evolved I’ve used patterns less.
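For instance, the classic strategy pattern, an interface plus a class per algorithm, collapses into a delegate once functions are first-class. A hedged sketch with hypothetical names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Report
{
    // Before delegates, this would have demanded an ISortStrategy interface
    // and a concrete class per ordering. A Comparison<T> delegate suffices.
    public static List<string> Sorted(IEnumerable<string> lines,
                                      Comparison<string> strategy)
    {
        var list = lines.ToList();
        list.Sort(strategy);
        return list;
    }
}

// Usage: the "strategy" is just a lambda, no class hierarchy required.
// Report.Sorted(lines, (a, b) => a.Length.CompareTo(b.Length));
```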
Is it about striving for that essential simplicity?
Yeah, I want code that does what it says. I spent some time with JBoss, which is exactly the opposite: it’s just hideous, you can’t look at the code and tell what it does, because it’s buried in an XML configuration file somewhere else. That’s why I don’t like aspect-orientated programming either. If I look at a lump of code, I should be able to tell what it does. If it’s got a before and an after aspect woven into it at compile time, I can no longer reason about my code locally. I guess that’s somewhat similar to my feelings when I see overuse of polymorphism. When most people do object-orientation, the two things they overuse are implementation inheritance, using inheritance to share code rather than to model an “is-a” relationship, and polymorphism, for much the same reasons. I find that code very difficult to understand when it’s used wrongly. You end up with super-generic code where you have to go up and down the hierarchy to find the appropriate implementation, whereas if it were just an algebraic data type in something like Haskell, it would be in one function definition.
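Modern C# can make the same point without switching languages. In this hedged sketch (hypothetical shape types), the whole behaviour sits in one function over a closed hierarchy, much like a Haskell function over an algebraic data type, rather than being scattered across an override in every subclass:

```csharp
using System;

public abstract record Shape;
public sealed record Circle(double Radius) : Shape;
public sealed record Rectangle(double Width, double Height) : Shape;

public static class Geometry
{
    // One function holds the entire definition of Area, so there is no
    // hierarchy to climb to find the appropriate implementation.
    public static double Area(Shape shape) => shape switch
    {
        Circle c    => Math.PI * c.Radius * c.Radius,
        Rectangle r => r.Width * r.Height,
        _           => throw new ArgumentOutOfRangeException(nameof(shape)),
    };
}
```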
Speaking of tests, just how crucial do you think tests are to the life of a software engineer?
I think there’s a balance there. If I’m writing something in a dynamic language like JavaScript, the code is just a series of symbols on a page that may or may not do anything. In that case I definitely need tests to validate that what I’ve typed in isn’t gibberish. Moving up the scale slightly, to languages like C# and Java, the code that’s written down is at least compiled, so I know some basic properties of the program. But the type system isn’t very rich, so I only know a small subset of the properties, and I still have to test that the functions do what I want. As the strength of the type system increases, I think the importance of testing decreases. But you still need both. I’d argue for a dynamic language you’d want a lot more tests. Java and C# are probably a middle ground: you still want a lot of tests, but the type system can tell you something. And if you move up to Haskell, the type system tells you much more than it does in Java. I can’t get null pointers, for example. And going all the way to the top with languages like Isabelle and Coq, I don’t need tests of course, because it’s a statically verifiable property that my program works, assuming the specification is exactly correct! So, you’d still need tests even with a fully verified program.
In that ideal world of languages with rich type systems, what do you think the role of testing is?
I really like the invariant style of testing. For example, I might assert an invariant over an operation that the number it produces is always bigger than the one it’s supplied with. If I’m raising two to the power of any positive integer, I always get a bigger number than I started with. I think the style of checking where I generate something and say that it satisfies this constraint when running the program is amazingly powerful. An example might be model-based testing. If I’ve written a search engine, it might be very complicated under the hood, but it turns out that I can model it very simply. Say I’m searching for numbers satisfying some criteria: I might use just a list and compare it against my very complicated search engine. With model-based testing, I can generate an ideal answer of what I expect and test that against the real-world implementation. That’s when you can find the really interesting things, where your model is slightly different from your specification or you’ve misinterpreted something. And those are the kind of interesting bugs that are really hard to find. I don’t think you ever find those kinds of bugs when writing a unit test, because you’re understanding the same specification that you’ve just implemented. It needs to come from somewhere else. Rather than me challenging the code to find a mistake, I pass that off to some external automated thing to do that for me.
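A hand-rolled sketch of that model-based style in C# (hypothetical names; a property-testing library would normally do the generation, but a loop shows the idea):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SearchEngineChecks
{
    static readonly Random Rng = new Random();

    // Stand-in for the "very complicated search engine": a sorted index
    // scanned for the first value above the threshold.
    static List<int> FancySearch(List<int> data, int threshold)
    {
        var index = new List<int>(data);
        index.Sort();
        int pos = 0;
        while (pos < index.Count && index[pos] <= threshold) pos++;
        return index.Skip(pos).ToList();
    }

    // The model: the same query answered with a plain list, the simplest
    // implementation that could possibly be right.
    static List<int> ModelSearch(List<int> data, int threshold) =>
        data.Where(x => x > threshold).OrderBy(x => x).ToList();

    public static void Check()
    {
        for (int run = 0; run < 1000; run++)
        {
            // Generate random data and a random query...
            var data = Enumerable.Range(0, Rng.Next(1, 50))
                                 .Select(_ => Rng.Next(-100, 100))
                                 .ToList();
            int threshold = Rng.Next(-100, 100);

            // ...and insist the real implementation agrees with the model.
            if (!FancySearch(data, threshold)
                    .SequenceEqual(ModelSearch(data, threshold)))
                throw new Exception($"Model mismatch for threshold {threshold}");
        }
    }
}
```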
Do you think programmers are going to move towards the higher end of the spectrum, towards languages like Haskell?
I don’t know. I’m not sure where I’d put software engineering in terms of how it’s evolved over time. Actually, we haven’t evolved over time. The problems that Fred Brooks talks about back in the day are still the problems that we talk about now when we talk about how systems work. That doesn’t exactly give you a lot of confidence for the future. We’re in another messy part of software engineering development with the web. There’s very loose definitions of how things interact: “Oh you just send some JSON over here”. Nothing formal has evolved yet, the industry seems to be resisting all temptation to do so, presumably because of the mass of existing stuff that’s out there. I don’t think that I’ll ever see software engineering becoming more concrete and a more predictable profession.
Arguably, the web and sending JSON has gotten us further with code reuse than object-orientation did.
We’ve got code reuse, but we’re at the stage where the messages that are sent between components are very poorly documented. There’s nothing to verify statically that you’re squirting the right thing across; it all happens dynamically. That, to me, is a bit painful. I guess from a pure software engineering point of view, I’d like it all to be statically verified. The earlier I know that my program works, the better. I do see it from the other side: I want to be able to add new features rapidly, and I don’t want to have to worry about versioning my interface. I can see it both ways, but ultimately I hope that things mature to the point where we can plug together components with more confidence that they’ll actually work.
On the spectrum from JavaScript to Haskell, is there anywhere that you feel most comfortable?
I’m probably most comfortable in the C#/Java world. I’ve spent more of my career doing that, but I recognise that functional programming is of increasing importance, and it’s a little bit outside of my comfort zone. It was really far outside my comfort zone; it’s less far now. There’s stuff to learn on that frontier. I still have stuff to learn in Java and C#, but the rate at which I learn is much higher in Haskell-land than in C#-land. So, I’m more comfortable in C#, but I’d still rather learn new things in Haskell.
Do you think there’s any middle ground between Haskell, which often feels very different, and C#/Java?
There are things like F#, which I haven’t used, but it’s a way of targeting the Microsoft .NET platform with a language that isn’t quite as strange as Haskell. That might be a gentler introduction. And there are concepts from functional programming leaking into Java and C#, such as first-class functions. Haskell and similar languages push the boundaries, and eventually they push them so far that what was once considered extreme Haskell is now sensible C#. Things like C# contracts, which give you compile-time verification of null pointers and so on in C#. It’s a little island in C#, and it’s not very useful yet, but maybe in a few versions it will be.
Do you think we ought to take that Haskell idea of controlled mutability and push that down into C#?
I guess the communities will have to evolve conventions to do that first. As more multi-threading happens, it’s going to become clearer that we need to distinguish between a pure function and one that pokes around a bit on the side, because that poking around on the side is what gets you into trouble. Maybe people start to use a method attribute to signify that something is a pure function, maybe a consultancy like ThoughtWorks starts doing that on their projects, and it becomes genuinely useful, and in years to come there is a call to make this a core part of the language. I don’t think it’s going to happen quickly; it’s going to need to come from the language community and bubble upwards into the specification.
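.NET already gestures in this direction: the Code Contracts library ships a [Pure] attribute, though it’s a convention checked by tooling rather than by the compiler, which is roughly the bottom-up path described here:

```csharp
using System.Diagnostics.Contracts;

public static class Maths
{
    // [Pure] documents that this method has no visible side effects.
    // Nothing in the C# compiler enforces it; contract tooling and readers do.
    [Pure]
    public static int Square(int x) => x * x;
}
```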
We’ve touched on how languages might change, but how do you think programming as a whole might change over the next ten years?
Slowly. I think JavaScript will still be the language to use for web-based stuff. I think there’s a really exciting opportunity now on desktop software. Java is going away, Sun has disappeared, Oracle are doing all the wrong things, wrongly, and there’s a general community consensus that Java is going away. Maybe not consensus, but you see the Clojure community springing up, you see Scala, you see influential consultant companies talking about all the new stuff that’s coming along. Java isn’t really there, whereas C# is still moving along nicely in terms of language innovation. So I see a real chance that some new language will spring up. And it might still use the Java Virtual Machine because that technology is excellent, but it will be able to react quicker than Java. I’m not going to make any predictions about what that language could be, but I think that in ten years, there will be a different competitor to C# than Java. And I’m quite excited to see that happen.
I think we’ll move towards more hybrid programming language styles. I don’t think we’ll move over to functional languages, I think we’ll try and take the best of functional and give an object-orientated twist to that. There’s a meme floating around about systems design which is object-orientated on the outside, functional in the middle. I think we’re seeing more designs based upon that pattern where your business logic is written in functional terms, and the way you interact with the system is written in object-orientated terms. And I think languages will evolve to support that.
If there was a single idea that you could tackle in programming or in computer science, what would it be?
I think code should be live. At the moment, you interact with code quite statically, and that’s not very interesting. We have compile-time relationships between two objects, and very rarely do we get those wrong. What we tend to get wrong is the dynamic nature of objects, how they fit together. At the moment, the only way to explore that space is via a debugger or a test. What I want to see is that reified into a real, living organism. I want my code to be editable not just at compile-time, but at runtime. I don’t just mean simple things like Java’s hot-swapping of classes; I mean I should be able to poke around my code, reorder it, move the instruction pointer back, reshuffle the code, view it from a different perspective. I should be interacting with my code as if it’s a living thing, rather than a series of symbols on a page with some tests that I run. I think that’s the big thing. Light Table has some of the ideas that I’m talking about, but I don’t think it goes anywhere near far enough. It should be a live, always-executing model of some code that I can change on a whim. It’s about shortening the feedback cycle. It is really hard, because code interacts with the external world, input/output and so on. But there are things that can be done there, and it’s a space I’d like to explore.