It’s encouraging that many of the conversations we are having at the moment in relation to IT and sustainability are moving beyond power management in the data centre. It is not that optimising the use of central IT isn’t important, but it really is only one way to drive an organisation’s environmental agenda. And even before we get to the main question of how technology can enable more eco-friendly working practices, there is another place we can look to for operational IT power savings – the desktop.
When looking in this direction, though, I have noticed that there is a tendency to apply the same kind of thinking that is used on the server side of the equation. Fair enough, accelerating hardware refresh to introduce more power-efficient kit reflects a similar game to that being played in the data centre, but once the carbon cost of manufacture and disposal is taken into account, the net gains are hard to establish. In the data centre, of course, hardware modernisation is augmented by consolidation and virtualisation to drive up average server utilisation and thus improve energy efficiency.
Virtualisation is a different game on the desktop, however. Sure, some will go down the route of running virtual PCs on the server and accessing them through thin client configurations, but it will be a long time before this is the norm. The reality is that most organisations will remain wedded to their fat clients for the foreseeable future, so we need to think of the energy question a bit differently. Essentially, the challenge boils down to optimising the power consumption of desktop machines that typically idle for the majority of the time they are switched on.
In order to deal with this problem, we need to think less about utilisation and the inherent power efficiency of hardware and software, and more about controlling the state of machines in terms of their sleep/wake cycle. In practice, a configuration that exhibits a high degree of runtime energy efficiency but has no active policy to transition to a low-power state when idle will consume considerably more power than a less efficient machine whose state is properly managed.
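To put some illustrative numbers on this (the wattages and hours below are my own assumptions, purely for the sake of the example), a quick back-of-envelope calculation makes the point:

```python
# Back-of-envelope comparison (all figures are illustrative assumptions):
# Machine A: efficient hardware drawing 60 W, but left running 24x7.
# Machine B: older hardware drawing 90 W active / 4 W asleep, with an
#            enforced sleep policy outside a 50-hour working week.

HOURS_PER_WEEK = 24 * 7   # 168
AWAKE_HOURS = 50          # assumed weekly 'awake' time for machine B

a_kwh = 60 * HOURS_PER_WEEK / 1000
b_kwh = (90 * AWAKE_HOURS + 4 * (HOURS_PER_WEEK - AWAKE_HOURS)) / 1000

print(f"A (efficient, always on): {a_kwh:.1f} kWh/week")     # ~10.1
print(f"B (less efficient, managed): {b_kwh:.1f} kWh/week")  # ~5.0
```

In other words, on these assumptions the less efficient but properly managed machine uses roughly half the energy of the efficient insomniac.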
This is something that Microsoft makes a big point of when talking about Vista in the green context, and indeed early adopters with large Vista estates corroborate Microsoft’s claims that Vista’s enhanced manageability translates directly into power savings. The problem, however, is that Windows XP isn’t going away in a hurry, so what about all of those organisations who are interested in desktop power management but will be maintaining older versions of the operating system for some time to come?
Well, the one approach that is generally acknowledged not to work that well is to educate, encourage or threaten users in an attempt to get them to keep their power configuration set in accordance with environmental policy, and/or to manually shut down their PCs or put them to sleep when they are not in use. IT managers relying on this kind of user discipline are probably not going to see the results they were hoping for unless they’re working for a totally green-tinted organisation.
Fortunately, third-party solutions exist that can help to enable and enforce centralised power management – a couple of examples being Verdiem and 1E. Using such technology, you can not only cure PC insomnia from a policy enforcement perspective, but also allow real-time remote control of power state, so machines can be woken up for backup or software distribution purposes and then put to sleep again afterwards. So, if you are serious about saving energy across a large XP estate, the options are there.
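For those curious about the mechanics of the remote wake-up part, here is a minimal sketch of a standard Wake-on-LAN ‘magic packet’ sender (the MAC address is a placeholder, and this is of course only the raw mechanism, which products like those above wrap in policy, scheduling and reporting):

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: six 0xFF bytes followed by the
    target MAC address repeated 16 times, via UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Hypothetical target machine; its NIC must have Wake-on-LAN enabled.
wake_on_lan("00:11:22:33:44:55")
```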
Something I haven’t had time to look into is whether similar solutions exist for alternative desktops – namely Mac OS X and Linux. Apple kit is certainly not renowned for its enterprise management friendliness, but perhaps ‘right on’ Mac users aren’t so much of a problem as they are of course more environmentally aware. As for Linux, I would be interested in any views, recommendations or experiences.
Meanwhile, it would be great to see a bit more awareness raising from Microsoft on the availability of solutions to centrally manage power consumption on Windows XP, rather than automatically segueing from this discussion into a Vista upgrade pitch.
Tuesday, 15 July 2008
Friday, 27 June 2008
Justifying a large scale Vista migration
Over the past couple of months, I have had in-depth conversations with five CIOs that have made a significant commitment to Windows Vista.
One of the main issues I explored with each of them was the foundation upon which the business case for migration was made. The responses I received were remarkably consistent, and not completely in tune with the way Microsoft articulates the Vista proposition.
What all these guys said was that their business case for Vista, i.e. the one put before the board, CFO and/or other significant stakeholders, was founded on benefits in two key areas - security risk management and operational cost control.
From a security perspective, the focus tended to be on three specific attributes of Vista - better run-time security in the operating system itself, more effective policy enforcement, and the ability to encrypt data on notebook PCs through BitLocker.
What I found interesting was the view that while all three of these security-related benefits were considered significant, it was the last one in particular that was most frequently highlighted as resonating directly with business stakeholders. Recent high-profile press coverage of notebooks storing sensitive data being lost or stolen was seen to have raised awareness here. Against this background, Vista’s ability to deal with an acknowledged business risk straight out of the box was perceived to be of significant value.
Beyond security, double-digit reductions in operational cost generally formed the substance of the business case in financial terms. The general streamlining of the management and maintenance process was highlighted as part of this, and the dramatic simplification of image management in particular was seen as a significant contributor to the savings in large multinational environments.
Something I was personally very sceptical about, but which three of the five CIOs defended very strongly, was the savings in relation to desktop power consumption. Figures of 50 Euros per desktop per year and upwards were cited, though to be absolutely clear, the benefit comes from better centralised control and enforcement of power management policies rather than from efficiencies in the way Vista uses hardware resources.
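As a rough sanity check on that 50 Euro figure (the tariff and power draws here are my own assumptions, not numbers from the CIOs), the arithmetic is at least plausible:

```python
# Illustrative assumptions: a desktop idling at 70 W (vs ~3 W asleep),
# currently left on 24x7, forced by policy to sleep outside a 50-hour
# working week, with electricity at 0.12 EUR per kWh.
idle_w, sleep_w, eur_per_kwh = 70, 3, 0.12
newly_asleep_hours = (24 * 7 - 50) * 52                      # hours/year
kwh_saved = (idle_w - sleep_w) * newly_asleep_hours / 1000   # ~411 kWh/year
print(f"~{kwh_saved * eur_per_kwh:.0f} EUR per desktop per year")  # ~49
```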
When asked about the element that was clearly missing from these business cases, namely improved user productivity, the general consensus was that this was a red herring. The most positive view was that there is likely to be some impact in this area, but it is impossible to measure in any tangible way, so why would you dilute an otherwise solid business case with something that could easily discredit it? Best to stick the list of intangibles in your bottom drawer and run with what you can defend with confidence.
And it is on this point that the CIOs I have been speaking with diverge from the view articulated by Microsoft. In fact, one said that the obsessive references to the great user interface, user-facing productivity features and so on caused a lot of distraction and confusion when he invited a Microsoft executive to meet some of his business sponsors. When a stakeholder says, “I don’t understand, I thought we were doing this to save money”, it doesn’t actually help to get the investment case signed off.
There are a couple of lessons that fall out of this. Firstly, if you are going through the process of evaluating the business case for Vista yourself, the abovementioned criteria will hopefully provide some thoughts based on where at least a few others have put the emphasis – particularly in a large corporate or public sector environment.
Secondly, the feedback suggests that you should be prepared for business sponsors to get confused about the rationale for migrating, based on the messages broadcast by Microsoft both directly and indirectly through advertising, the media, marketing collateral and so on. The trick here is agreeing that it will be a great spin-off benefit if all of the claimed or suspected end user productivity gains are realised, but keeping the investment case itself focused on the more solid stuff that can be defended under cross-examination.
Finally, there is a message in here for any Microsoft executives reading this. If you can curb your enthusiasm for obsessing about the Wow! and focus on the things that drive decisions, you might see more movement in the market.
Thursday, 12 June 2008
Business Intelligence and the bolting horse
There appears to be a revival of interest in Business Intelligence (BI) among IT vendors at the moment. Some pretty big guns, the likes of Oracle, IBM, SAP and Microsoft, are trying to position themselves more aggressively in this space following a spate of acquisitions.
So is this renewed vigour justified?
Well, from a customer perspective it undoubtedly is. It is pretty clear when you research BI that the gap between business need and IT capability is as great as ever. When we interviewed a bunch of senior business managers from City of London financial institutions last year, for example, they were very clear about this gap:
[Chart: perceived availability of business information, strongest at the overall financial and operational performance level, weakest for more detailed measures and indicators]
And if you look at this chart closely, you will notice something quite interesting. While business information availability isn't that bad at an overall financial and arguably operational performance level, it is not very good when you look at more detailed measures and indicators.
Why is this interesting?
Well, because it tells us that by the time those managing the business find out about something important, it is often too late to do anything about it. Stories of product, client or partner related issues only coming to light when someone starts investigating why a higher level number has been missed are quite common.
To put it another way, business managers usually have what they need to monitor the ‘effects’ of doing business, but are typically underserved when it comes to the information required to manage the underlying ‘causes’ of those effects. We discuss this more in the research report from the study if you are interested, but it does bring home the importance of incorporating continuous analytics capability into the business process itself, as well as having traditional retrospective BI operating off to one side.
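To make the distinction concrete, here is a trivial sketch of what the ‘in-process’ idea means, with names and thresholds invented purely for illustration: a cause-level indicator is checked as each event flows through the process, rather than surfacing months later in a retrospective report.

```python
from collections import defaultdict

# Invented example: track cancellations per client as orders flow through.
cancellations = defaultdict(int)
CANCEL_THRESHOLD = 3  # assumed tolerance before someone should take a look

def record_order_event(client: str, status: str) -> None:
    """Called inline by the order process itself, so the 'cause' surfaces
    immediately rather than via a retrospective BI report."""
    if status == "cancelled":
        cancellations[client] += 1
        if cancellations[client] == CANCEL_THRESHOLD:
            alert(f"{client} has hit {CANCEL_THRESHOLD} cancellations this period")

def alert(message: str) -> None:
    print("ALERT:", message)  # stand-in for a real notification channel

for _ in range(3):
    record_order_event("AcmeCorp", "cancelled")  # third call raises the alert
```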
The aforementioned vendors are therefore spot-on when it comes to making a big noise about the principle of integrating BI capability into applications in a more embedded fashion. Now, whether they have done a good job of integrating their recent acquisitions into their broader solution sets in practice is another question, but it is at least worth hearing them out.
Sunday, 1 June 2008
Talking at cross purposes, or being deliberately misled?
Ever had one of those conversations where you debate something for a while then it dawns on you that each party has been talking about something different? It has happened to me quite a few times recently.
One example was in relation to Business Process Modelling (BPM), which is something I grew up with and in my mind is about, well, modelling business processes. It’s a discipline that business analysts have been involved with for years, and while the technology to support it has moved on, and arguably some of the methodologies too, the fundamental principles haven’t changed that much for a long time now. Then someone asked Freeform Dynamics to design a research study to figure out the level to which organisations had adopted BPM. When I argued during an internal project start-up meeting that you couldn’t really ask someone about when and how they were taking something on board that they had been doing for a decade or two, it turned out that the ‘BPM’ we were being asked to investigate was actually 'Business Process Management', based on a definition which included the technical side of things – workflow rules engines, SOA orchestration, and so on. Not quite the technology-independent business view of BPM that I was taught earlier in my career, but as soon as the misunderstanding was cleared up, we could design the research accordingly.
Another example was prompted by a report I read the other day claiming that Software as a Service (SaaS) is now a mature and pervasive model. This was reminiscent of claims made during a number of other conversations I have had recently with SaaS advocates, claims that I have been struggling to reconcile with the findings of our own research. That research has shown quite conclusively that while larger organisations are starting to make selective use of SaaS for delivering business application functionality, 'pervasive' is certainly not a word that applies in this area. Then I realised that some of the advocates were throwing a whole bunch of stuff into their definition of SaaS (or the related S+S model) that I would never dream of including when discussing the delivery of business application functionality. Internet search, traditional ISP services, and even things like consumer content services, online help and automatic updates associated with desktop software can sometimes be lumped together when referring to the 'SaaS market'. Again, once the ambiguity is cleared up, you can see where people are coming from, and make a judgement on the usefulness (or otherwise) of what they are saying.
I guess we at Freeform are particularly sensitive to precision when it comes to discussing market activity, as primary research designed to figure out what’s really going on behind the buzzwords and the hype is so central to what we do. The experiences I have outlined, however, highlight how easily people can be misled by imprecise or ambiguous definitions if they are not on their guard. And with so much vested interest and evangelism driving the market, the temptation for some to spin and exploit our ever changing vocabulary is significant, so we all need to be careful about what is behind those stats and definitions.
Sunday, 27 April 2008
Cloud Computing and Web 2.0
Don’t you just hate it when another woolly ambiguous term is forced upon us? When I was approached by yet another journalist the other day asking me my thoughts on the impact of cloud computing, I simply sighed and told them it is a bit like Web 2.0. In itself, it is difficult to pin down exactly what is meant by it. The best you can do is say that both of these terms refer to a general direction in which the industry appears to be moving.
In the case of Web 2.0, it is about the Web becoming a generally more interactive medium. This can manifest itself at a technology level through everything from Ajax through mash-ups to SOA, and at a behavioural level through social media and the simple fact that websites are generally now more geared up to a two-way dialogue than they used to be.
In the case of cloud computing, it is about the evolution of dynamic virtualised infrastructure that allows us to think more in terms of resource pools than individual IT components. This in turn opens the door to delivering computing resource on a utility basis, which is equally applicable both internally (i.e. with regard to the way you use your data centre) and externally – which takes you into the realm of utility computing and software as a service.
The point about both Web 2.0 and cloud computing is that they both sprang up arbitrarily on the evolutionary timeline, and seemingly embraced anything and everything that could be thrown into the mix. While the very specific phenomenon of social networking is certainly noteworthy, it bears little relationship to the evolution of rich user interfaces and composite applications; in fact, many social networking sites have appalling UIs by traditional standards. Yet Web 2.0 can mean either of these things, and, confusingly, lots of other concepts too.
Similarly, we have been talking about virtualisation ultimately leading to computing grids and utility computing for years, and giving it a new name doesn’t actually change anything in terms of the underlying trend. In fact, you knew where you stood much better when you could talk about virtualisation and grid technology as the enabling stuff, and utility computing and application services as what it enables. As everyone jumps onto the cloud computing bandwagon, it all gets mixed up and confused, just like Web 2.0.
So, if you are one of those people wondering what cloud computing is really all about after listening to the IBM explanation, the Microsoft one, and the evangelical rhetoric we have heard recently from the Google and Salesforce.com camp, don’t worry, you are not alone. The trick is to think of it as a label for a trend at one level, and an industry bandwagon at another, and to keep your expectations pretty low in terms of clarity and consistency for the time being. Don’t, however, dismiss the underlying trend itself. While we are not looking at a revolution here, some of the developments in this general area are really quite interesting and valuable – though you probably knew that already, even before the marketing hype was thrust upon us.
Monday, 14 April 2008
Oracle and Collaboration
I was interested to read about Angela’s experience trying to secure a briefing from Oracle on its collaboration related offerings and activities. As Angela pointed out, the ‘Big O’ was the only large vendor that ‘should’ have a story in this space that declined to tell her what it was up to.
When I later commented on this (with a link to the above) via Twitter, someone else came back to me to say that they too had been having trouble getting Oracle to open up in this area.
I have to say that this doesn’t surprise me. It must be quite challenging for Oracle at the moment trying to figure out how to position in this space. The Oracle Collaboration Suite was launched a few years ago supposedly to save the world from flaky Microsoft Exchange installations and pretty much fell flat. Oracle believed its own rhetoric about the world hating Microsoft, so looked silly to most people when it aggressively launched an initiative that would only work if customers ditched their existing Microsoft messaging infrastructure, which was never going to happen.
In addition to some of the things Angela mentioned, we have also seen the portal wars in which Oracle has consistently been on the back foot, and lately, the march of Microsoft SharePoint and a range of collaboration and unified communications offerings from IBM under the Lotus and WebSphere brands that are largely messaging system agnostic.
Then most recently, we have seen the BEA collaboration offerings thrown into the mix, which, before the acquisition, were beginning to look pretty good. BEA had a very sound grasp of the heterogeneous world in which customers live and was taking a very mature view of social media in the enterprise, for example. And, of course, it wasn’t encumbered by competitive obsession, which, as an aside, is arguably one of the biggest obstacles to Oracle being accepted as a truly strategic partner in many major accounts. Telling CIOs and business executives that they have been stupid over the years to waste their money on SAP, Microsoft and IBM, for example, is not the best way to win friends in high places. While competition is good, destructive messaging generally only appeals to junior level activists. It is a huge turn-off in senior management circles.
Coming back to the original question, we should probably continue to expect Oracle to be tight-lipped on not just collaboration, but middleware strategy in general for a little while yet. I have personally been told on a couple of occasions to refer to the ‘official line on oracle.com' when looking for clarity on open questions that we hear from Oracle’s customers (old or newly acquired). Irritating though this might be, and frustrating though it is to be fobbed off with ‘Mom and Apple Pie’ type feel-good policy statements, the truth is that there is little else Oracle can do until it gets its act together properly.
And to be fair, given some of the confusion that came about as a result of articulating nice-sounding stories around work-in-progress plans associated with its CRM and ERP acquisitions in the past (plans that later had to be ‘adjusted’), it is probably better for us to hang on until Oracle really has worked out what it is trying to do in collaboration, as it has in the enterprise application space.
Oracle is undoubtedly already aware that it needs to be careful that the collaboration and closely related unified communications markets do not slip away from it, and will be doing what it can to make sure it doesn't get left behind again. In the meantime, it goes without saying that customers should challenge the company hard before making major commitments to it in these areas.
Friday, 28 March 2008
Making chipsets interesting
At the risk of offending all those who love to talk for hours about cores, caches and clock speeds, I have to say that I personally find discussions about the innards of silicon chips and how they are wired together intensely boring. In fact, I’ve probably already used all the wrong words and phrases, even in that first sentence, which is no doubt going to annoy some people further.
So, when Tony, Martin and I were invited to a dinner to meet some of AMD’s European executives, I was understandably in two minds about attending, especially as I am not really into all this wining and dining stuff in the way some other analysts are.
I went along, though, and I’m glad I did. Sure, I found myself sucked into the odd eye-glazing conversation that I only partially understood, but something that came across clearly was that AMD is investing quite a bit in ‘reaching through’ relationships with its direct customers (largely the OEMs) to the ultimate customers – enterprises, SMBs and consumers.
Of course there is nothing new or unique in this; in fact, I ran a team at Nortel Networks back in the early 2000s which did exactly the same thing (in that case, reaching through the mobile operators to understand how 3G related to their subscribers). The basic idea is that you can gain insights and tune your R&D based on direct end user/buyer input in a way that would not be possible if you worked second-hand through your customer as an intermediary. To do this well, however, you really need people who understand that end user environment and the trends taking place within it, and they are not necessarily the same people that deal with your core product design from an internal perspective.
Anyway, this end-user oriented view of the world shifted discussions to more familiar territory for me during the dinner, and I enjoyed hearing people like Giuseppe Amato, who goes under the title “Director, Value Proposition Team”, explaining how the whole process works in relation to data centre evolution, high performance computing and mobile working. It changed my perception of AMD quite a bit from simply “the alternative to Intel” to that of an independent player that is committed to driving industry development in its own way.
While I am not qualified to comment on the relative merits of AMD technology versus the competition, nor its ability to execute in the cut throat world of OEM deals and supply chains, I now have a much better appreciation of why what AMD does actually matters. It is not just about price/performance or performance per watt of energy consumed, it is about shifting thresholds to make things economically or practically possible in the mainstream market that previously were not. That’s why the “what if you could....?” conversations with end customers as suppliers like AMD reach through to them are so important. And also why, for the first time in my life, I actually had some genuinely interesting conversations about silicon that were directly relevant to the world in which I live.