Thanks to everyone for coming to my sessions and to the organizers for making the event run so well. The facility was great, and it’s really quite remarkable that the community can have such a strong one-day training event without having to charge participants. Microsoft was there to help out, but it was clearly a training event for the community, not a Microsoft press event, exactly as it should be.
We had fun talking with the other vendors at the show, most notably Component One and the folks from CapTech. This is the first time we’ve had a vendor booth at an event like this, so in many ways it was a dry run to get the hang of what it’s like. We gave away a full copy of Gibraltar Analyst as well as a year’s subscription to Hub at the conference, and got to talk with a lot of people about both Gibraltar and VistaDB. We got some great real world examples of where VistaDB fits, which is a big help as we work on the marketing for that going forward.
I presented two sessions:
A Year in the Life of an ISV
If you’re thinking about what it’d be like to ditch your corporate development job or consulting gig and strike out to create & market your own product (or you’re a consultancy looking to create a product to diversify), this presentation shows what to expect on the path from shipping your first version to business success.
This one’s always a little risky at a code camp because, well… there’s no code. But, with the incredible diversity of tracks that were available at Philly Code Camp (13 tracks, over 60 sessions…) I think it’s also good to be able to “take a break”. Next time I might go for the last session of the day to maximize the value of that.
Designing APIs for Others
I covered a range of real-world lessons about commercial API development, emphasizing the differences between in-house development and great, reusable commercial libraries.
I got some great feedback on this talk, particularly on an example that broke my own rule about samples: I tried to over-simplify it and instead created a “not best practice” sample. I’ll fix that for next time!
If you saw either presentation, please be sure to fill out the conference evaluation and I’d love to hear your feedback – drop it in the comments below or send it to me directly at email@example.com.
If you’d be interested in having me come talk at your code camp, .NET Users Group, or event – please reach out and let me know. I’m always looking for new & better ways to engage with the community.
One of the most popular saws in technology is the cry “That technology is dead! Now it’s all the New Thing”. Recently there was a big brouhaha over whether or not WPF was dead – replaced with Silverlight or HTML5. Arguments went back and forth through blog articles, twitter, whatever medium necessary all the way up to Scott Guthrie at Microsoft. What really struck me through all of this was the improbable standard used for what constitutes a dead technology.
Portraying tech as either dead or not is too simplistic to be useful. Instead, let’s consider a few states:
- New: Fresh technology still being fleshed out. This is the leading edge of advancement.
- Mature: Fully fleshed out and well understood.
- Deprecated: The technology is planned to become unsupported or be removed in the next environment.
- Dead: The technology has significant risks, is not supported, and can’t be used on the latest environment.
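As an illustrative sketch only (the names and advice strings are my own, not from any framework), the four states and the build/migrate guidance that falls out of them might be modeled like this:

```python
from enum import Enum

class TechState(Enum):
    NEW = "new"                # still being fleshed out; the leading edge
    MATURE = "mature"          # fully fleshed out and well understood
    DEPRECATED = "deprecated"  # planned to lose support in the next environment
    DEAD = "dead"              # unsupported and unusable on the latest environment

def guidance(state: TechState) -> str:
    """Rough advice for a big, customer-critical system in each state."""
    return {
        TechState.NEW: "Adopt cautiously; best practices are still forming.",
        TechState.MATURE: "Safe default for big, customer-critical systems.",
        TechState.DEPRECATED: "Plan a migration before it goes dead.",
        TechState.DEAD: "Do not start new work; migrate existing systems now.",
    }[state]
```

The point of the sketch is just that the interesting decisions live in the middle two states, not at either end.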
In this model there are a lot of shades of grey between Not Dead and Dead. If you are building a big, customer-critical system, you probably want to be using mature technologies. If you have one of those systems and it’s based on deprecated items, you need to put together a strategy to migrate off of it before it goes dead.
WPF: Still the new kid on the block
Many arguments about something being dead are really focusing on just the first two states: is it brand new and still a hotbed of development, or is it mature? For example, compare WinForms and WPF in .NET. The former is mature and the latter is still new. Why would I classify WPF as new, three years after its initial introduction?
- It was significantly updated in .NET 4.0, which shipped just five months ago.
- The ecosystem of controls and other libraries for WPF is still expanding rapidly.
WPF is probably just about to transition to mature. The ecosystem of controls is now large enough that you can quickly develop most applications you might otherwise use WinForms for. This wasn’t true at the start of 2010, but the last six months have seen an explosion of new libraries from all of the usual players. Quite probably Microsoft won’t significantly mess with WPF going forward, and that’s a good thing: It means you’ll be able to take what you write today and move it forward to the next few versions of .NET over the next decade without significant rework. Migration rework doesn’t add value to you or your customer.
WinForms: All grown up
Now, my friends that do Silverlight or WPF development are probably howling right now that WinForms is long dead & buried. After all: Microsoft doesn’t have a legion of developers working on it, and quite literally nothing was done in the last two releases of the .NET framework for it. But it just doesn’t qualify as either deprecated or dead:
- WinForms on .NET will be supported for a long time, quite likely as long as .NET and Windows are because it’s built on GDI+ which is foundational to Windows.
- WinForms is still being patched for security vulnerabilities and will be supported in the next major version of Windows.
So even though it hasn’t been changed since .NET 2.0, it’s still a completely viable platform to develop even brand new applications on. The fact is that a new application developed for WinForms will likely be officially supported by Microsoft as long as WPF, certainly through several more major versions of Windows. We’ve found that a number of our customers at Gibraltar Software and VistaDB are actively developing and starting new projects in WinForms because it’s a very productive environment. It has a large body of third party controls, the tools & techniques to develop for it are well understood, and in the end that delivers results.
Visual Basic 6: Dead Man Walking
If you want to look at a technology teetering on the edge of death, take VB 6. You can run it on the latest environment (Windows 2008 R2 and Windows 7 both support VB 6, no joke!) but it isn’t being supported or patched even for security problems. That puts it hovering right between deprecated and dead. If I had to bet money, I’d say VB 6 will still be alive and kicking on Windows 8, but I would definitely not develop any new VB 6 applications and I’d be actively migrating away from them.
People Love a Winner
Developers are just people – and we love winners. This is the best car, that’s the best language, this is the best database. There is so much going on in software that it isn’t feasible for any one person to assimilate all of the options available, relate them to a problem at hand, and truly sort out the one very best match between them. Despite this, we’re asked to do that all of the time, so we want to fight anything that makes our job more complicated. We don’t want two ways to make web applications, two ways to make client user interfaces, etc.
We’d prefer a nice clean line of replacement where new, cooler things replace older, stodgy things. By this logic, WPF replaced WinForms, which replaced MFC, or something like that. Silverlight is a bit of an odd duck, but sure – put it in the same line now that you can run out of the browser with it.
Get Results the Mature Way
Just because Microsoft doesn’t have an army of developers actively changing it and isn’t beating the drum about it in every press event doesn’t mean you shouldn’t use it. In fact, look at it another way: If Microsoft is still rapidly evolving a technology, you should really think about whether you want to jump in. Since it’s changing, you can’t bank on a body of best practices or an ecosystem of third party libraries. You’re most likely to run into platform defects in design & implementation, and most likely to be heartbroken when the next version of .NET breaks your clever hack or makes it completely irrelevant.
In the end, what are you trying to do? If you’re focused on getting results and solving real-world problems for customers, stick with mature. If you want to hang out with the cool kids and be a pioneer have fun with the latest new thing. Just know that it’s the pioneers that end up on the ground with arrows in their backs.
It’s very popular to consider the internal users of IT services as customers, acting like IT is an in-house service provider that the rest of the company purchases services from. The goal behind this is usually a reaction to a real or imagined belief that IT isn’t being responsive to the needs and budget of the rest of the company. The thinking goes that by having IT think of the rest of the company like an outside organization would of its customers, you can ensure better accountability and buy-in. Typically, organizations that go down this road also adopt a charge-back model where the IT organization charges back all or nearly all of its costs directly to the other divisions within the company that are consuming those services.
While there are several positive aspects that can come from this approach, it can also easily create problems, all stemming from the fact that in most cases the rest of the company really isn’t a customer in the classical sense. Why? Because they lack a true buying choice. Furthermore, it generally isn’t in a company’s overall best interest for their divisions to really be customers of their own organization.
The original motivation for taking this approach is usually to address several issues:
- Division buy-in on costs and priorities: If they are directly paying the bill, they are going to pay for what they want and not ask you for things they aren’t willing to pay for.
- Clear status and communication: The project reporting and communication model is simpler for everyone to get their head around if it’s based on something we’re very familiar with. Each player can figure out their part.
If you model the relationship between the IT organization and the rest of the company as a service provider – customer relationship, it’s easy to miss the transitive qualities of this: if they are your customer, you are their vendor. The word vendor casts things in a different light: If you’re a sufficiently large organization, you probably have a vendor management office whose sole job is to ensure you pay the least you can for things and to foster competition between vendors. Their job is largely to keep the company from getting too cozy with any one vendor. Are you ready to be just another vendor, like the one that bids annually to supply fresh coffee or office supplies?
There are several good things this model will tend to create:
- Defensible functional requirements: Unreasonable requirements tend to be expensive relative to their value, and the division is more ready to discard them.
- Role Clarity: The Vendor/Customer relationship is relatively easy to understand, and each party can generally determine their role quickly. When there are disputes, there’s a natural framework for resolution.
Challenge One: Buying Choice
It isn’t a long road from treating your internal divisions as customers until they look at you as a vendor. Once they consider you just another vendor, they’ll want the advantages that come along with being a customer. For example, it’ll seem clear to them that it should be optional to use your services. This will feel very reasonable to upper management – it’s all part of the transitive nature of IT being accountable. If IT can’t deliver a service at the best price, why not go to another provider?
This will likely start with something that will be difficult to argue against – such as a large software development project, perhaps in a language that your in-house talent isn’t familiar with. Now, what about hosting for that product? If you are charging back true costs for your data center to each division, you are unlikely to be competitively priced with what a division could get from Rackspace or Peer1. It isn’t necessarily that those companies are more efficient than you are at doing the same thing (indeed, if they are then you should broker your own contract with them) but instead that it isn’t an apples-to-apples comparison.
Challenge Two: Implied Requirements
Whenever an internal IT organization takes on a project, there are a number of implied requirements that affect cost and schedule. Some of these requirements are from the IT organization itself (like technology choices) and others are from the corporation (role of internal staff and contractors, project management and reporting standards, etc.). When a division looks to bid out work to an external source, these requirements are usually unstated because in many cases they aren’t requirements the division has on the solution.
Another way to look at it is that any constraint on the solution that the customer (the division in this case) doesn’t have or care about is an implied requirement and likely a competitive disadvantage when comparing internal IT costs with external costs. In broad strokes, the difference in requirements is that a division’s requirements are almost entirely about outcomes, not methods: They care about the results their users get, not how they are achieved. IT organizations often focus their requirements on how results are achieved (using this technology, in that enterprise architecture, developed with our RUP-based approved process, tracked by our PMO) and they defer to the division the functional requirements.
Local Maxima and Minima
When each division or cost center is free to choose what services they are willing to pay for, they will converge over time on only those services that are good for them. Establishing shared services is generally challenging because each party will want to ensure that everyone is paying their fair share. This is often tricky to define – should it be proportioned by feature usage? Capacity? This often creates a “first mover disadvantage” scenario where no part of the company wants to be the first to get a new service such as a database server or SharePoint Portal because they’ll be hit with the entire cost of it unless someone else comes along.
Secondarily, upgrading services gets challenging because no drop of rain believes it is responsible for the flood: If you want to upgrade to Exchange 2007 from Exchange 2003, one division can easily say that they don’t believe it’s necessary and thus decline the costs. If you need a larger server to house SharePoint, who is going to get the bill? A game of chicken often gets created where multiple parties all want a service, but no one wants to be the first to ask and risk subsidizing everyone else.
With each cost center pushing to pay only for those things it perceives as providing sufficient direct value, each is making decisions based only on what gives it the best cost or maximum value. This isn’t likely to align with providing the overall lowest costs for the company. For example, three separate departments could easily decide to implement their own direct-attached storage because none of them feels they can justify the cost of a SAN; however, together it would be less expensive to construct and maintain a central SAN environment with SAN backup.
There are some straightforward exceptions to this problem where shared services are generally easy to get consensus on and cost out. Typically these are raw infrastructure services such as email or file storage where there are clear units of measure that allow for proportional billing (mailboxes and gigabytes used, for example).
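To make the proportional-billing idea concrete, here is a minimal sketch of a chargeback split by units of measure. The division names, unit counts, and dollar amount are all invented for illustration:

```python
def charge_back(total_cost: float, usage: dict) -> dict:
    """Split a shared service's cost in proportion to each division's
    measured usage (mailboxes, gigabytes stored, etc.)."""
    total_units = sum(usage.values())
    return {division: total_cost * units / total_units
            for division, units in usage.items()}

# Example: a $12,000 email bill split by mailbox count (numbers invented).
bill = charge_back(12_000, {"Sales": 300, "Engineering": 150, "Finance": 150})
# Sales holds half the mailboxes, so it carries half the cost ($6,000).
```

The reason this works for raw infrastructure and not for most other services is exactly the clear unit of measure: everyone agrees what a mailbox or a gigabyte is, so no one feels they are subsidizing anyone else.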
An Alternative: Clients, not Customers
If the Customer/Vendor model isn’t the overall best approach for a company, what alternative model can provide the benefits without the unintended consequences? How about a term that’s between User (which has accumulated a substantially negative connotation) and Customer – Client. A quick trip to the dictionary shows that a client is any person or group that is the party for which professional services are rendered, which fits reasonably enough.
As your clients, they still are entitled to a great deal, just like customers would be. As the client of the project, they:
- Determine success & failure: Your project isn’t successful just because it follows the corporate processes or works on the corporate approved IT infrastructure; those are the constraints on how you solve problems that are immaterial to your client. Success is determined by whether you achieved the goals the client created. That may mean you need to do some extra communication to make sure your client knows that their goals were met, even if that’s not in the standard process.
- Decide if it’s worth the price: In the end, the problem may just not be worth solving. Many things can be done but the cost in time and distraction exceeds the value.
Unlike a customer, since you’re part of the same organization you can share with the client your insight into the costs and risks of the project in a way that no vendor ever could. In the end this creates the best partnership that delivers long lasting results.
A final note
If you don’t treat your users as clients, odds are very good they will eventually get themselves a buying choice. When they do, they won’t choose you. Don’t let it come to that; it isn’t ultimately in their interest, your interest, or your company’s interest.
If you’ve read more than one or two articles from Reliable Systems you probably have gotten the sense that we worry a lot about how to make things just work. It’s that quality of anything where you get what you expect and what you need every time. It can be in an experience (like a fun drive down a country road) or a product. As a company if you can do this over and over you create a brand people develop a strong emotional connection to: Apple, John Deere, Starbucks…
When you want to create a product that just works, you need to get all of the details right – from packaging through to maintenance and upkeep. It’s not one thing that’s important, it’s all the things. We are often engaged by senior management within a client when things aren’t working, and there are conflicting opinions on why. Usually along the path technology is being blamed: Not enough, not the latest thing, not someone’s favorite thing, not working. As we dig into the situation, rarely is the technology the dominant factor: More often, it’s how the technology is being integrated with the people and processes that all have to work together.
One of the first things we have to do in these engagements is to establish the real facts on the ground: What exactly are the problems in the system, who’s doing what with it, and how often. It comes down to establishing metrics to make sure time and attention are paid to the parts that make the biggest difference in the outcome. Armed with these facts in a form the business can consume, it’s possible to create plans of action that deliver virtually regardless of budget.
So let’s make this easier
The biggest trick is then getting the facts you need on an ongoing basis, easily, and in a form that the business can consume. For over a decade we’ve been building instrumentation right into the systems we’ve worked on. We’ve created a variety of toolkits to make this easier over the years, refining them as technology and our experience has changed.
About 18 months ago we decided it was time to really invest down this path. We believe in routinely capturing key computer metrics along with whatever logging the application can do on its own. We won’t do a project without using a great logging system that includes a strategy for managing runtime exceptions. Now that we’re collecting all this data, we need to have a way of managing the raw data and turning it into valuable business data.
The challenge is that businesses don’t get up in the morning and say “what our customers want us to do is have great internal tools”, so you’re nearly always doing this on the cheap: Borrowing time from development projects internally to cobble together various free or cheap solutions. Frankly, we got tired of having to create new solutions with each client out of the margins of each project. So, we pooled our best thinking from all of the work we’ve done (including a previous product that we did license to our clients over the past decade called CLAS) and started creating Gibraltar.
Rock Solid from Initial Release
With Gibraltar we wanted much more than a log system. Of course, it had to be a log system too – and a really easy-to-use one that could work with each of our client applications. More than that, it had to:
- Automatically capture all of the performance metrics we wanted.
- Integrate with existing logging available on the platform, including whatever a client might already be doing (like custom in-house options)
- Be absolutely, positively, for sure safe to run in production no matter what. That means it can’t ever use too much disk space or disk throughput or block the application.
- Not use more than 5% of the performance of the app
- Include all of the tools necessary to get data from where it was collected to the people that could get value out of it
- Include the ability to go from detailed session data up to high-level analysis: What’s the error rate? What does it correlate to? Are we doing better or worse in this version?
From this initial sketch of everything we wanted, we’ve spent 18 months, including four beta periods (of 2-4 months each), refining the vision with real customers and real scenarios. It was essential to us that this not be just a tool for techies but be ready for use by people with a wide range of skills. It had to be pretty and just do what you wanted, when you wanted it to.
We’ve added a lot of capabilities along the way: It can generate print-ready reports about application reliability that you can share with senior management, and you can define all kinds of custom metrics to easily track how your application is used and by whom. We ran a number of betas to be sure that we had hit every goal above. We’re happy to report that Gibraltar is in use within large deployments of custom applications, commercial applications, and small deployments right down to our corporate web site.
This tool isn’t for everyone – Our clients are nearly all Windows shops, and if they do any custom development it’s almost invariably in .NET, so that’s what we’ve targeted. But, if you’re interested in easily getting real data on not just infrastructure (how well the application is running) but whether or not it just works, have we got an easy path for you. You can see a quick demo video of how it works technically at Gibraltar Software.
You also don’t have to take my word for it at all, you can hear what one of our beta users did with it, which is really a more compelling story than what we might say.
I think you’ll find we’ve sweated a lot of little details, from the exact design of the API and making sure the documentation was complete to rewriting our own licensing system to be very IT-admin friendly. If we didn’t get a detail right, we want to know. And the great news is that we’ve just begun: We’re obsessed with the little things, and you can bet we’ll keep listening and watching to make it better. Of course, this is made a lot easier because we’re using Gibraltar to monitor itself, and a select group of our users is sending that information back to us so we can make sure it just works in the field for real people.
It’s easy to start your journey
If you do development for Microsoft .NET, I’d encourage you to go over and download our commercial release of Gibraltar. You’ll get great documentation, a free agent you can use like a flight recorder “black box” in every application you create, and a trial for a tool that will make you seem wise beyond your years. And if you pay us the ultimate honor and purchase a permanent license, I can assure you that you won’t find anyone more committed to your satisfaction than we are.
It is very tempting to be one of the herd of gazelles in technology. Every time there’s a sense of a shift in the wind, everyone starts to run in a new direction. For the past year I’ve been reading about how it’s all going to be laptop computers from here on out. In fact, not even full-fledged laptops, but netbooks – computers with small screens and small keyboards whose main distinguishing characteristic is that they’re less of a computer than anything else around.
If all this sounds a little off kilter from reality, perhaps a few hard numbers would help:
Quoting Computer World, who asked “Do Business Desktop PCs have a future?”:
While desktop PCs account for the bulk of personal computers sold to enterprises, the gap in laptop sales to enterprises is closing. Of 168 million PCs sold worldwide to professional organizations in 2008, about 95 million were desktops and 73 million were laptops. That’s compared to 94.6 million desktops and 47.3 million laptops that shipped in 2006.
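Running the quoted figures through a quick growth calculation shows why both of the readings below are available from the same data:

```python
def growth(old: float, new: float) -> float:
    """Percentage growth from the old figure to the new one."""
    return (new - old) / old * 100

# Figures quoted above, in millions of units, 2006 vs. 2008.
desktop_growth = growth(94.6, 95)   # ~0.4% over two years -- essentially flat
laptop_growth = growth(47.3, 73)    # ~54% over two years -- the real movement
```

Desktops barely moved while laptops grew by more than half, yet desktops still outsold laptops in absolute terms, which is exactly the ambiguity the two interpretations below exploit.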
Now, as with any statistics, there are two ways to look at these numbers:
- Laptops have grown tremendously in their total percentage of the market, and that growth rate has them on track to take over the world.
- The majority of the growth in computer sales is coming in the form of laptops.
The gazelles are taking the first road. And why not? People love to assume the disruptive is true, it’s a lot more interesting. Before you charge down that road, consider what seems likely. There are a few problems with the first conclusion:
- Two data points don’t make a pattern: If you follow the trend back farther, the sales of PC desktops have held up consistently, but laptop sales go up and down. This would seem to indicate that the most likely interpretations of the data are that either the overall market is expanding (for example, by people having two systems) or that this is a momentary, periodic surge in laptop purchases.
- Past large growth rarely projects forward: Just because there was large growth in one year (either in absolute or percentage terms) doesn’t mean it will repeat at all. It’s just as likely that next year’s pattern will be flat or even retreat.
So before you see the first twitch and assume it signals a migration of the whole herd, step back and think through the underlying facts. Is this really the first sign of a monumental shift? Or just another twitch of the needle? Then look at your own situation.
Now, we have a few laptops, but we have more hard core desktops – the laptops are used for on the road presentations or working at Starbucks for fun. Of course, we’re developers so we’re in the category of users that are always excluded from the norm. But what’s not to love about a desktop? For the same money they will always be faster and more capable than a laptop because they don’t have the burden of being small or extra power efficient. Even if you buy into the idea that everything will be run through the web so computers are just glorified terminals… Something still has to compose all of those web pages and make it all come together, and web apps can burn a surprising amount of processor and RAM locally.
In the end, I think we’re seeing a lot of folks buying second computers or getting additional laptops for other uses that complement their primary work computer experience. Additionally, there are folks in emerging markets that need what laptops offer (self-contained, reliable power) more than performance but this reflects an increase in the overall market, not a shift in the existing market.