Archive for Software Development
One of the oldest saws in technology is the cry “That technology is dead! Now it’s all about the New Thing.” Recently there was a big brouhaha over whether or not WPF was dead – replaced by Silverlight or HTML5. Arguments went back and forth through blog articles, Twitter, whatever medium was handy, all the way up to Scott Guthrie at Microsoft. What really struck me through all of this was the improbable standard used for what constitutes a dead technology.
Portraying a technology as simply dead or alive is too simplistic to be useful. Instead, let’s consider a few states:
- New: Fresh technology still being fleshed out. This is the leading edge of advancement.
- Mature: Fully fleshed out and well understood.
- Deprecated: The technology is slated to become unsupported or be removed in the next version of its environment.
- Dead: The technology carries significant risks, is not supported, and can’t be used on the latest environment.
In this model there are a lot of shades of grey between Not Dead and Dead. If you are building a big, customer-critical system, you probably want to be using mature technologies. If you have one of those systems and it’s based on deprecated pieces, you need to put together a strategy to migrate off of it before it goes dead.
WPF: Still the new kid on the block
Many arguments about something being dead are really focused on just the first two states: is it brand new and still a hotbed of development, or is it mature? For example, compare WinForms and WPF in .NET. The former is mature and the latter is still new. Why would I classify WPF as new, three years after its initial introduction?
- It was significantly updated in .NET 4.0, which shipped just five months ago.
- The ecosystem of controls and other libraries for WPF is still expanding rapidly.
WPF is probably just about to transition to mature. The ecosystem of controls is now large enough that you can quickly develop most applications you might otherwise use WinForms for. This wasn’t true at the start of 2010, but the last six months have seen an explosion of new libraries from all of the usual players. Quite probably Microsoft won’t significantly mess with WPF going forward, and that’s a good thing: it means you’ll be able to take what you write today and move it forward to the next few versions of .NET over the next decade without significant rework. Migration rework doesn’t add value for you or your customer.
WinForms: All grown up
Now, my friends who do Silverlight or WPF development are probably howling right now that WinForms is long dead and buried. After all, Microsoft doesn’t have a legion of developers working on it, and quite literally nothing was done for it in the last two releases of the .NET Framework. But it just doesn’t qualify as either deprecated or dead:
- WinForms on .NET will be supported for a long time – quite likely for as long as .NET and Windows themselves – because it’s built on GDI+, which is foundational to Windows.
- WinForms is still being patched for security vulnerabilities and will be supported in the next major version of Windows.
So even though it hasn’t changed since .NET 2.0, it’s still a completely viable platform for developing even brand-new applications. The fact is that a new application developed for WinForms will likely be officially supported by Microsoft for as long as WPF is, certainly through several more major versions of Windows. A number of our customers at Gibraltar Software and VistaDB are actively developing and starting new projects in WinForms because it’s a very productive environment: it has a large body of third-party controls, the tools and techniques for developing with it are well understood, and in the end that delivers results.
Visual Basic 6: Dead Man Walking
If you want to look at a technology teetering on the edge of death, take VB 6. You can run it on the latest environments (Windows Server 2008 R2 and Windows 7 both support VB 6, no joke!) but it isn’t being supported or patched, even for security problems. That puts it hovering right between deprecated and dead. If I had to bet money, I’d say VB 6 will still be alive and kicking on Windows 8 – but I would definitely not develop any new VB 6 applications, and I’d be actively migrating away from the ones I have.
People Love a Winner
Developers are just people – and we love winners. This is the best car, that’s the best language, this is the best database. There is so much going on in software that it isn’t feasible for any one person to assimilate all of the options available, relate them to the problem at hand, and truly sort out the single best match. Despite this, we’re asked to do exactly that all of the time – so we fight anything that makes our job more complicated. We don’t want two ways to make web applications, two ways to make client user interfaces, and so on.
We’d prefer a nice clean line of succession where new, cooler things replace older, stodgy things. By this logic, WPF replaced WinForms, which replaced MFC, or something like that. Silverlight is a bit of an odd duck, but sure – put it in the same line now that it can run outside the browser.
Get Results the Mature Way
Just because Microsoft doesn’t have an army of developers actively changing a technology and isn’t beating the drum about it at every press event doesn’t mean you shouldn’t use it. In fact, look at it the other way: if Microsoft is still rapidly evolving a technology, you should really think about whether you want to jump in. Since it’s changing, you can’t bank on a body of best practices or an ecosystem of third-party libraries. You’re most likely to run into platform defects in design and implementation, and most likely to be heartbroken when the next version of .NET breaks your clever hack or makes it completely irrelevant.
In the end, what are you trying to do? If you’re focused on getting results and solving real-world problems for customers, stick with mature. If you want to hang out with the cool kids and be a pioneer have fun with the latest new thing. Just know that it’s the pioneers that end up on the ground with arrows in their backs.
If you’ve read more than one or two articles from Reliable Systems, you’ve probably gotten the sense that we worry a lot about how to make things just work. It’s that quality of anything where you get what you expect and what you need, every time. It can be in an experience (like a fun drive down a country road) or a product. As a company, if you can do this over and over, you create a brand people develop a strong emotional connection to: Apple, John Deere, Starbucks…
When you want to create a product that just works, you need to get all of the details right – from packaging through to maintenance and upkeep. It’s not one thing that’s important, it’s all the things. We are often engaged by senior management within a client when things aren’t working and there are conflicting opinions on why. Usually, somewhere along the way, technology is being blamed: not enough of it, not the latest thing, not someone’s favorite thing, not working. As we dig into the situation, rarely is the technology the dominant factor: more often, it’s how the technology is being integrated with the people and processes that all have to work together.
One of the first things we have to do in these engagements is establish the real facts on the ground: what exactly are the problems in the system, who’s doing what with it, and how often. It comes down to establishing metrics to make sure time and attention are paid to the parts that make the biggest difference in the outcome. Armed with these facts, in a form the business can consume, it’s possible to create plans of action that deliver virtually regardless of budget.
So let’s make this easier
The biggest trick is then getting the facts you need on an ongoing basis, easily, and in a form the business can consume. For over a decade we’ve been building instrumentation right into the systems we’ve worked on. We’ve created a variety of toolkits to make this easier over the years, refining them as technology and our experience have changed.
About 18 months ago we decided it was time to invest seriously in this path. We believe in routinely capturing key machine metrics along with whatever logging the application can do on its own, and we won’t do a project without a great logging system that includes a strategy for managing runtime exceptions. Now that we’re collecting all this data, we need a way of managing the raw data and turning it into valuable business information.
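As a minimal sketch of what that exception-management strategy can look like in a WinForms application – using only the standard .NET hooks, with Program and MainForm as placeholder names:

```csharp
using System;
using System.Diagnostics;
using System.Windows.Forms;

class MainForm : Form { } // placeholder for the application's real main window

static class Program
{
    [STAThread]
    static void Main()
    {
        // Exceptions thrown on the UI thread: log them before the user
        // sees a failure dialog.
        Application.ThreadException += (sender, e) =>
            Trace.TraceError("UI thread exception: {0}", e.Exception);

        // Exceptions escaping worker threads: the process is usually
        // terminating after this fires, so flush the log immediately.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            Trace.TraceError("Unhandled exception: {0}", e.ExceptionObject);
            Trace.Flush();
        };

        Application.Run(new MainForm());
    }
}
```

The point isn’t the specific logging call – route it wherever your log system lives – it’s that both hooks are registered before any real work starts, so nothing escapes unrecorded.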
The challenge is that businesses don’t get up in the morning and say “what our customers want us to do is have great internal tools,” so you’re nearly always doing this on the cheap: borrowing time from development projects to cobble together various free or inexpensive solutions. Frankly, we got tired of having to re-create solutions for each client out of the margins of each project. So we pooled our best thinking from all of the work we’ve done (including CLAS, a product we licensed to our clients over the past decade) and started creating Gibraltar.
Rock Solid from Initial Release
With Gibraltar we wanted much more than a logging system. Of course, it had to be a logging system too – and a really easy-to-use one that could work with each of our client applications. More than that, it had to:
- Automatically capture all of the performance metrics we wanted.
- Integrate with existing logging available on the platform, including whatever a client might already be doing (like custom in-house options)
- Be absolutely, positively, for sure safe to run in production no matter what. That means it can never use too much disk space or disk throughput, and it can never block the application (a minimal sketch of this technique follows the list).
- Never consume more than 5% of the application’s performance.
- Include all of the tools necessary to get data from where it’s collected to the people who can get value from it.
- Span everything from detailed session data up to high-level analysis: What’s the error rate? What does it correlate to? Are we doing better or worse in this version?
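To make “never block the application” concrete, here’s a minimal sketch of the general technique – this is illustrative, not Gibraltar’s actual implementation – using a bounded queue drained by a background thread, which drops messages under pressure rather than stalling the caller:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

// A deliberately simple non-blocking logger: all I/O happens on a
// background thread, and when the queue is full we drop the message
// rather than block the application. Drops are counted so the loss
// is visible rather than silent.
public class NonBlockingLogger : IDisposable
{
    private const int MaxQueueLength = 10000; // caps memory use
    private readonly Queue<string> _queue = new Queue<string>();
    private readonly object _lock = new object();
    private readonly StreamWriter _output;
    private readonly Thread _writer;
    private bool _shutdown;
    private int _dropped;

    public NonBlockingLogger(string path)
    {
        _output = new StreamWriter(path, true);
        _writer = new Thread(WriteLoop) { IsBackground = true };
        _writer.Start();
    }

    // Called from application threads; only ever takes a short lock.
    public void Log(string message)
    {
        lock (_lock)
        {
            if (_queue.Count >= MaxQueueLength)
            {
                _dropped++; // under pressure: drop, never block
                return;
            }
            _queue.Enqueue(message);
            Monitor.Pulse(_lock);
        }
    }

    private void WriteLoop()
    {
        while (true)
        {
            string message;
            lock (_lock)
            {
                while (_queue.Count == 0 && !_shutdown)
                    Monitor.Wait(_lock);
                if (_queue.Count == 0)
                    return; // shut down and fully drained
                message = _queue.Dequeue();
            }
            _output.WriteLine(message); // slow disk I/O, off the app's threads
        }
    }

    public void Dispose()
    {
        lock (_lock)
        {
            _shutdown = true;
            Monitor.Pulse(_lock);
        }
        _writer.Join();
        if (_dropped > 0)
            _output.WriteLine("[logger] dropped {0} messages under load", _dropped);
        _output.Flush();
        _output.Dispose();
    }
}
```

The design choice that matters: the application thread never waits on I/O, and the fixed queue bound caps both memory and disk pressure no matter how badly the application misbehaves.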
From this initial sketch of everything we wanted, we’ve spent 18 months – including four beta periods of two to four months each – refining the vision with real customers and real scenarios. It was essential to us that this not be just a tool for techies but be ready for use by people with a wide range of skills. It had to be pretty and just do what you wanted, when you wanted it to.
We’ve added a lot of capabilities along the way: it can generate print-ready reports on application reliability suitable for senior management, and you can define all kinds of custom metrics to easily track how your application is used and by whom. We ran a number of betas to be sure we had hit every goal above. We’re happy to report that Gibraltar is in use within large deployments of custom applications, commercial applications, and small deployments right down to our corporate web site.
This tool isn’t for everyone – our clients are nearly all Windows shops, and if they do any custom development it’s almost invariably in .NET, so that’s what we’ve targeted. But if you’re interested in easily getting real data on not just infrastructure (how well the application is running) but whether or not it just works, have we got an easy path for you. You can see a quick demo video of how it works technically at Gibraltar Software.
You don’t have to take my word for it, either: you can hear what one of our beta users did with it, which is a more compelling story than anything we might say.
I think you’ll see the results of our sweating a lot of little details, from the exact design of the API and making sure the documentation was complete, to rewriting our own licensing system to be very IT-admin friendly. If we didn’t get a detail right, we want to know. And the great news is that we’ve just begun: we’re obsessed with the little things, and you can bet we’ll keep listening and watching to make it better. Of course, this is made a lot easier because we’re using Gibraltar to monitor itself, and a select group of our users is sending that information back to us so we can make sure it just works in the field for real people.
It’s easy to start your journey
If you do development for Microsoft .NET, I’d encourage you to go download our commercial release of Gibraltar. You’ll get great documentation, a free agent you can use like a flight-recorder “black box” in every application you create, and a trial of a tool that will make you seem wise beyond your years. And if you pay us the ultimate honor of purchasing a permanent license, I can assure you that you won’t find anyone more committed to your satisfaction than we are.
Anyone who has stained and varnished wood knows that the final finish is only as good as every layer beneath it:
- If you take a piece of wood and don’t sand the surface smooth, it won’t take a stain evenly.
- If you let glue creep out of the joint and get on the wood, the stain won’t look right in that spot.
- A piece of dust on the surface will get magnified by each layer of varnish.
Each layer depends on what’s underneath it. Any flaw in a lower layer will tend to get magnified and distorted by layers above it.
Whenever I get involved in enterprise architecture I’m reminded of this analogy, because I often run into irrational exuberance over the idea that you can add a layer to an existing system and paint over the flaws underneath. I was involved in a few projects like that early in my career: it was too hard to talk directly to the mainframe from the web server, so a layer went in front of that. There was already a C++ layer providing a DCE RPC gateway, but that was also too hard to program against for broad use, so we added a COM interface on top of the DCE RPC gateway. We made some prototypes, validated the concepts, and charged in full bore – only to run into big delays and teething problems near the end of the project, trying to get everything polished up and suitable for production.
The problem is that at each turn you may be making developers’ short-term lives easier by giving them an interface more natural to their preferred programming environment, but since it just builds on the layers underneath, it ends up with all of their limitations – and those can show up in the most surprising places. For example, we ran into failures with certain inputs that we ultimately traced to % being used as an insertion marker by a gateway library several layers beneath what we were doing. At best it would drop the % and the following character; at worst you’d get back a random data element if you managed to create a valid insertion name.
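Here’s a small illustrative reconstruction of that failure mode – not the original gateway code – showing how a lower layer that treats %NAME% as an insertion marker corrupts data that every layer above assumed was opaque:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Illustrative only: a lower-level "gateway" that expands %NAME% markers,
// the way the real gateway library did several layers below our code.
static class Gateway
{
    static readonly Dictionary<string, string> Insertions =
        new Dictionary<string, string> { { "USER", "jsmith" } };

    public static string Send(string payload)
    {
        var result = new StringBuilder();
        for (int i = 0; i < payload.Length; i++)
        {
            if (payload[i] != '%') { result.Append(payload[i]); continue; }

            int end = payload.IndexOf('%', i + 1);
            string name = end > i ? payload.Substring(i + 1, end - i - 1) : null;

            if (name != null && Insertions.ContainsKey(name))
            {
                result.Append(Insertions[name]); // worst case: random data injected
                i = end;
            }
            else
            {
                i++; // best case: % and the following character silently vanish
            }
        }
        return result.ToString();
    }
}

static class Demo
{
    static void Main()
    {
        // Upper layers pass user input straight through, assuming the
        // gateway treats it as opaque text. It doesn't.
        Console.WriteLine(Gateway.Send("Discount is 15% off"));   // "Discount is 15off"
        Console.WriteLine(Gateway.Send("Progress: %USER% done")); // "Progress: jsmith done"
    }
}
```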
Layering issues are particularly problematic because they tend to be data sensitive and highly situational depending on how the various layers interact. This means that it’s very difficult to design a comprehensive test plan: The system can act as if it’s nondeterministic, making it infeasible to state with certainty that the various modes of the software have been demonstrated by a test plan. At best, you can say that it worked for the exact test inputs it was given. When you do have a problem in production, the multiple technologies in multiple layers can make it particularly hard to debug because it requires a lot of chairs around the table to hit all of the possible players.
Are you decorating? Or covering up the problem?
Whenever you’re part of a team proposing a new layer over an existing system to fix its problems or adapt it to a new situation, you should be suspicious. Is this really the right path to make the API look right? Or are you temporarily covering over a problem? If it’s the latter, it’ll just show through later – and then you have two problems to deal with, not one.
There are good occasions to add a new layer:
- To smooth technology upgrades: When you are shifting technologies, say from COM to .NET, you may want to create a standardized interoperability adapter that lets you separate the upgrade into phases and handle them independently (see the sketch after this list).
- To support multiple technologies: Sometimes you need to support multiple types of clients – varying either by environment (say Java and .NET) or major architecture (say Client/Server and Web sites).
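As a hedged sketch of the first case (all type and method names here are hypothetical): define one narrow interface, put an adapter over the legacy COM component behind it, and migrate callers one at a time until the native implementation can be swapped in.

```csharp
using System;

// Stand-in for a generated COM interop wrapper (hypothetical).
namespace LegacyCom
{
    public class CustomerLookup
    {
        public string FindEmail(string customerId)
        {
            return customerId == "42" ? "jane@example.com" : "";
        }
    }
}

// The one narrow seam both generations implement; callers depend only
// on this interface, so the migration can proceed one component at a time.
public interface ICustomerDirectory
{
    string LookupEmail(int customerId);
}

// Phase 1: adapter over the legacy COM component.
public class ComCustomerDirectory : ICustomerDirectory
{
    private readonly LegacyCom.CustomerLookup _legacy = new LegacyCom.CustomerLookup();

    public string LookupEmail(int customerId)
    {
        // The legacy component signals failure with an empty string;
        // normalize that here so both implementations behave identically.
        string email = _legacy.FindEmail(customerId.ToString());
        if (string.IsNullOrEmpty(email))
            throw new InvalidOperationException("Customer not found: " + customerId);
        return email;
    }
}

// Phase 2: native .NET implementation behind the same interface. Once
// it's proven out, swap it in without touching the callers.
public class SqlCustomerDirectory : ICustomerDirectory
{
    public string LookupEmail(int customerId)
    {
        throw new NotImplementedException("Query the database directly here.");
    }
}
```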
And a few suspect ones:
- Mitigate architecture risk: To isolate a new subsystem architecture from the main codebase. We’ve heard this one before – you want to try out something new and iffy, like, say, Entity Framework. To contain the risk, you introduce a layer between it and the rest of the platform so that if it all goes bad you can easily swap it out.
- Impedance Mismatch: You need to interact with something, but just don’t like the way it works. Perhaps it throws around ADO.NET recordsets and you prefer to work with strongly typed objects.
If you find yourself in one of the suspect scenarios, you should seriously question whether the work you’d do to create and validate a layer is really forward progress or just yak shaving. Before you go down that path, seriously estimate the alternatives:
- Fixing the underlying problems: If the underlying layer(s) aren’t doing what you need, what would it take to get them changed (in the technology they’re currently in) so you could work without adding another layer? That puts the responsibility where it belongs, and keeps complexity under control. Do a full estimate of this approach.
- Making a parallel layer: If you set aside the powerful aversion to creating duplicate routes to the same data, what would it take to create an alternate path to the underlying information? You might bypass all of the layers or just some of them (going down, say, to the stored procedures that call the database). While this creates duplication, it lets each platform work in its own optimal way and allows for deterministic testing.
- Using the existing layer as it is: It’s easy to overstate the impact of reusing a known system with issues. There’s a natural tendency to forget that you’re comparing a well-understood but flawed system against an unknown solution with unknown problems. Trading known problems for unknown ones makes everyone happy at the start of a project but creates significant risk downstream.
Put down the shovel and back away
When you create a new layer on top of existing layers, you are often digging your project into trouble both now and downstream. In addition to each layer creating a leaky abstraction, deploying and supporting these highly layered systems is extraordinarily challenging. It becomes prohibitively expensive to make changes in lower layers because of the high chance of unexpected side effects showing up as defects in dependent applications. More often than not, each layer has to be held static, with any changes accommodated by creating new queries or items at each layer, served in parallel with the older ones.
Before you go ahead, be sure you look at the total lifecycle cost of that decision, including support and maintenance. Have you had a good or bad experience putting up some software wallpaper? Let us know in the comments!
Hi. My name is Kendall Miller, and I’ve been an Internet Explorer user for 12 years.
I know it’s very uncool to admit – IE is only available on Windows, it isn’t the most standards-compliant, it has a bad reputation for security, and there aren’t 1,000 cool add-ins for it. And oh yeah, it’s from Microsoft – aren’t they evil?
But here’s the thing: from day one, it was the practical choice. Not because it was distributed with Windows, but because it worked. Worked as in: it was easy to keep upgraded (thank Windows Update for that), it could do anything we needed it to (and yes, that used ActiveX), and it supported integrated authentication – so users could just point their web browsers at our company sites and, without any login prompts, get access to all the resources they needed.
Not only that, but IE was really forgiving. What many people miss is that, at the end of the day, it isn’t about being standards-compliant per se; it’s about the web browser just working. Put another way: while you want to develop against something really strict, when it comes time to hand your work over to users, “do what I meant, not what I said” is best. Frankly, I was amazed at how tolerant IE was of HTML errors. In the end, do you think users would say “thank you for not doing what I meant” just because doing so would have meant breaking the rule that you can’t wrap a div in an anchor? Most users I know would rather it just displayed the page.
Is the pace of IE development slower than Firefox or other browsers? Well, yes – but again, so what? Businesses really don’t like change; change costs money. Change means retesting applications, and for what? It has to give them something to justify that cost. It isn’t as if IE won’t remain a viable tool for browsing the Internet for some time. Remember, these are the same companies that are still (justifiably) writing applications in VB6, an environment its creator has been trying hard to kill for 8 years.
So it makes a lot of sense that when it came to IE 8, Microsoft focused on what IE users really wanted. It wasn’t standards compliance for its own sake; it was:
- Make it as fast as possible for how people browse the web today.
- Fix as many of the quirks in standards interpretation as feasible, making it easier to develop good sites for IE.
- Do whatever you can to avoid a rash of new security problems.
- Don’t break any site that works today.
If you want a good feel for just how hard this problem is, Joel Spolsky did a great (but long) writeup.
But it’s no longer my default browser. While I’ve been trying to love it, I just can’t get there. I switched over to Chrome when it went into release and haven’t looked back. On the surface of it, this is a bit crazy:
- Many sites don’t work quite right with Chrome. I’ve gotten halfway through a shopping cart with Chrome and had the button to go to the next step simply not work. Things don’t align right… Virtually every web app we use at eSymmetrix had to be patched to work with Chrome. Even WordPress seems to work better with IE than Chrome in some ways.
- Google is getting evil. They’re doing things Microsoft could never get away with. For example, every Google product installs its own Google Updater service. It doesn’t ask if it can, and I can’t find any way to get rid of them through uninstall – they’re just there. I have no idea if they’ve ever updated Chrome because they’ve never asked. You can be sure that if Microsoft did this they’d be in court faster than you can say Slashdot.
- It’s not really stable. About every other day I get the admittedly cute “Aw, Snap!” page where Chrome just isn’t quite happy.
But it doesn’t matter, I still dramatically prefer it to the other three browsers on my computer. Why?
- It’s fast.
- I love the automatic dashboard of the nine most-visited sites with previews. I love how it handles browsing history.
- The document inspector is great for web development, so I use it all the time when developing web pages and style sheets.
- It’s the future.
That’s just Crazy Talk
Right now Chrome has around 1.5% of the total browser market share. That’s nothing – about what Opera and Netscape have put together, and much less than Safari at 8%. On the other hand, IE 6 and IE 7 each have roughly the market share of all the rest put together. So statistically, Chrome is completely irrelevant – and it’s had a lot of time to get to that point, while IE 8 got there pretty much as soon as it was released.
So why do I call Chrome the future? Because for web applications to genuinely replace desktop applications, four things have to come together:
- Universal high bandwidth: Enough to move, say, 3 MB in 5 seconds reliably.
- A browser engine fast enough to run rich application code.
- A general-purpose UI toolkit for building real application interfaces.
- A visual Integrated Development Environment (IDE) that can bring it all together with a good debugger for browser and server.
We already have the first one: moving 3 MB in 5 seconds takes about 5 Mbps of raw throughput (3 MB is roughly 24 megabits, and 24 ÷ 5 ≈ 4.8), call it 6 Mbps with protocol overhead – cable modem speed. Chrome brings in the second one (and actually bests it by some margin). Now we need the third and fourth.
So now that there’s an environment that can run it, we need the general UI toolkit and the IDE to develop with, so we can put our time and attention into creating features for our users instead of into how to make a menu that dynamically expands and highlights. The IDE needs to provide an experience like developing for WinForms or WPF in Visual Studio – clean, easy, visual, without surprises.
I’m an engineer at heart. I worry about all the little details of how something works technically, and when I can, I go for the overengineered solution every time. We recently needed a microphone-preamp-to-USB device. Instead of getting the plastic M-Audio unit that probably works just great, I got the USBPre at twice the price. Why? Just look at that case – it’s awesome.
With a nice metal case like that, industrial strength construction – it’ll last forever! Of course, this thing will never leave my desk, so the ability to be run over by a truck is more or less academic.
Given my natural preference for hard-core engineering, I’d like to be able to report that the best software comes from a group of driven software engineers. Technically, that may be true – a big group of engineers can make a very technically sophisticated product. But really great products? Those require a lot more than technical excellence.
I think this is the backstory behind Vista’s successes and failures. We’ve been using Vista as our corporate OS since January of 2008, not long after it was widely available. It’s worked very well for us – even better since SP1. But again, we’re engineers: half of our systems are 64-bit and we use high-end hardware, so we were very good candidates.
A Whole Lotta Polish
Last weekend I installed Windows 7. Now, even though I generally love new toys I haven’t been chomping at the bit to try out Windows 7 because Vista is working great for me, and we’ve had a lot of deadlines I didn’t want to risk. But, with the release of build 7100 last week, I couldn’t resist.
What’s the big difference between Windows 7 and Windows Vista? Polish. A whole lotta non-engineering polish. I was using the media center capabilities last night and noticing all of the little things that are completely irrelevant from an engineering/functional standpoint. These same things make all the difference in how you perceive the quality of the product and, more importantly, the quality of the experience of using the product.
Is build 7100 without issues? No – there are some optimization issues I’ve run into, but they’re likely already known within Microsoft, and they have months to refine them. The big picture is that the risky, time-consuming design details are all there. I haven’t even turned off UAC yet, and I couldn’t live with it under Vista for more than two hours.
Now, it may be that if you’re creating the next version of SQL Server, this fundamentally human element of intuitive adjustment and polish isn’t as necessary. SQL Server could be all about hard-core specifications, tests, and optimization. That’s reasonable when the human-to-product interface is either through a standard you can’t affect (e.g., T-SQL) or is confined to highly technical specialists.
Goes to Eleven
When you’re creating an application, you aren’t going to find the polish by reading a functional specification. You also aren’t going to get it just by using any particular development methodology – Agile, Waterfall, whatever. What you have to be willing to do is go beyond the written functional and system specifications and look carefully at each aspect of the human-computer interface in your product.
This dedication requires a few things:
- Access to a User Experience (UX) / Human-Computer Interface (HCI) specialist. These folks are experts not because of facts and figures you can read in a book, but because of the experience and practiced eye that let them pick out the key details that make all the difference.
- Dedication to making it better: At each turn, and in very difficult moments, you’re going to have to repeatedly look at what you have and what you’ve done and ask, “OK, how do we make this better? Suppose we could leap beyond this – what would that look like?”
Done right, this experience can be torturous for engineers, because it’s about iterating through hard-to-quantify, experimentally determined states without objective metrics to guide the process. You will see the results of your work – but only as distant thunder, as your users either rave more and more about what you’ve done or just meekly accept what you give them. Engineers are used to tweaking a knob and seeing the needle move in a quick, quantifiable way.
If you want to get a sense of what happens when people think deeply about how to create software that interacts well with people, read the Microsoft document on how to write an error dialog for Vista. This is 28 pages on how to do a good error message and why. Warnings? Another 12 pages. Even if you’re a hard-core engineer, some of the Vista User Experience Guidelines are a great read for understanding why it takes many iterations and at least equal measures of instinct and intellect.
Fighting the Good Fight
The challenge with pushing for breakthroughs in the user experience of your product is that it doesn’t fit well into traditional engineering problem-solving techniques. That may be why some of the most successful organizations at it have a strong command-and-control personality (like Apple) that emphasizes an individual making an intuitive judgment about what’s best. Trying to apply traditional engineering approaches will generally stifle and drive away the very talent that excels at solving these problems. Just ask Google: their well-respected expert on design and usability quit this year, saying:
I’m thankful for the opportunity I had to work at Google. I learned more than I thought I would…. But I won’t miss a design philosophy that lives or dies strictly by the sword of data.
The full text is an interesting read. Probably the most poignant example was testing which shade of blue should be used in a specific scenario. This is a case of “trust your judgment, but don’t try to explain it”: it’s a fundamentally human, intuitive leap, and while you might be able to rationalize it, that doesn’t mean you can really explain it.
The best part is that if Microsoft is finally getting the message that it isn’t enough to just compete on business and engineering requirements – that you have to battle for the hearts and minds of the people who use the products – it’s good for everyone. Just as Linux has pushed Microsoft to evolve Windows faster (and create more low-cost licensing options), this may push the players known for great design to up their game as well. I can’t wait.