For the tl;dr crowd… Google’s algorithms are constantly changing; and no matter the topic, work in at least one mention of cats. LOL.
When evaluating tools like JIRA, HP ALM, or IBM Rational, it’s important to evaluate project needs against product capabilities. Obviously the costs of getting started with JIRA are much lower than some alternatives. But sometimes, being penny-wise can result in being pound-foolish.
For a simple “MVC” type application with a limited set of components, it’s likely JIRA’s features will be adequate. Or project needs can be met with some minor customizations and/or plugins.
However, when managing ongoing development of systems which contain many levels of hierarchical components, JIRA’s limitations may present significant obstacles. For many years, there have been open feature requests regarding support for hierarchies. As of March 4, 2014, JIRA’s response is that it will be another 12 months before they “fit this into their roadmap”.
JIRA JRA-846: Support for subcomponents
For large distributed systems, with complex dependencies, this presents a significant challenge.
While setting up a new JIRA/Atlassian environment for a solution composed of 8 major applications, I’ve found that it is not possible to create a hierarchy of subcomponents, nor is it possible to establish versioning for those subcomponents. Instead, the JIRA data model and workflows are designed for all components of a project to exist as a flat list, and for all components to be on the same version/release cycle.
For our solution, many of the major applications start with a commercial product, incorporate multiple modules, integrate an SDK, integrate 3rd Party plugins, and finish with custom coding of multiple subcomponents. The design pattern is to establish interface boundaries, decouple the components, and enable components to be updated independently (some people call this SOA).
Now I’m getting a clearer picture of when it is time to consider alternatives such as HP ALM or IBM Rational. In the past, I’ve encountered several very successful JIRA implementations. And I’ve encountered a number of failures.
Comparing my current experience of setting up a new “systems development” project in JIRA with those past experiences, I now understand the tipping point was a matter of component complexity. JIRA’s architecture needs to change so that components can be containers for other objects, and can be versioned independently. While there are elegant, simple ways to introduce a data model which supports this, it would likely require them to refactor most (if not all) of their application stack. Given their success with smaller projects, it’s easy to understand their business decision to defer these feature requests.
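To illustrate the kind of data model I have in mind, here is a minimal sketch in Python. The names are hypothetical and this is not based on JIRA’s actual schema; it only shows components acting as containers for subcomponents while carrying their own versions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Version:
    """A release identifier owned by a single component."""
    name: str            # e.g. "2.3.1"
    released: bool = False

@dataclass
class Component:
    """A component that can contain subcomponents and be versioned independently."""
    name: str
    parent: Optional["Component"] = None
    subcomponents: List["Component"] = field(default_factory=list)
    versions: List[Version] = field(default_factory=list)

    def add_subcomponent(self, child: "Component") -> "Component":
        child.parent = self
        self.subcomponents.append(child)
        return child

# A project becomes a tree of components, each on its own release cycle,
# instead of a single flat component list sharing one project version.
app = Component("Order Management")
sdk = app.add_subcomponent(Component("Vendor SDK"))
sdk.versions.append(Version("4.2", released=True))
connector = app.add_subcomponent(Component("Custom Connector"))
connector.versions.append(Version("0.9"))
```

The model itself is not exotic; mapping existing issues, workflows, and release reports onto a tree like this is the part that would force the refactoring.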
JIRA continues to recommend workarounds, and several 3rd-party plugins attempt to address the gap. Unfortunately, each of these workarounds is dependent upon the product’s internal data model and workflows. JIRA itself has discontinued development of features which support one of their suggested workarounds. And some 3rd-party plugins have stopped development, most likely due to difficulties staying in sync with internal JIRA dependencies.
It can take six months to two years to get an HP ALM or IBM Rational solution running smoothly, and there are ongoing costs of operational support and training new developers. However, there are use cases which justify those higher costs of doing business.
It’s unfortunate my current project will have to make do with creative workarounds. But it has provided me an opportunity to better understand how these tools compare, and where the boundaries are for considering one versus the other.
In today’s news… Midwest gets surprise snow forecast in February.
Does anyone in the media realize how they sound when acting surprised by snow in the winter? This does occur about the same time every year.
Sub-prime lending enabled a great number of borrowers, lenders, and investors to participate in markets and transactions which they most often did not fully understand. In many (perhaps most) cases they did not understand the risks involved, did not understand the contracts they entered into, and were completely unprepared when risks became realities.
Similarly, public cloud computing services are enabling great numbers of new customers, service providers, and investors to make low-cost entries into complex transactions with many poorly understood or entirely unknown risks.
The often low upfront costs, combined with rapid activation processes, make public cloud services enticing to many cost-conscious organizations. However, many of these services have complex pay-as-you-go usage rates which can result in surprisingly high fees as the services are rolled out to more users and become a key component of those users’ regular workflows.
Many public cloud services start out with low introductory rates which go up over time. The pricing plans rely on the same psychology as introductory cable subscriptions and adjustable rate mortgages.
Additionally, there is often an inexpensive package rate which provides modest service usage allowances. Like many current cell phone data plans, once those usage limits are reached, additional fees automatically accumulate for the “add-ons” providers will upsell, such as:
* User accounts and concurrent user sessions
* Static IP addresses
* Data backups
* Geographic distribution or redundancy
* Encryption certificates and services
* Service monitoring, and even usage reporting
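To make the math concrete, here is a back-of-the-envelope sketch. The rates, allowances, and add-on fees below are invented for illustration and are not any particular provider’s price list:

```python
def monthly_bill(base_fee, included_users, overage_per_user, active_users,
                 addon_fees=()):
    """Estimate one month's bill for a hypothetical pay-as-you-go plan."""
    overage_users = max(0, active_users - included_users)
    return base_fee + overage_users * overage_per_user + sum(addon_fees)

# Pilot phase: 20 users fit inside the package allowance.
print(monthly_bill(99.0, 25, 8.0, 20))                              # 99.0

# Full rollout: 400 users plus static IPs, backups, and monitoring,
# and the same $99 starter plan is now a four-figure monthly bill.
print(monthly_bill(99.0, 25, 8.0, 400, addon_fees=(50, 120, 75)))   # 3344.0
```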
It is also common for public cloud service providers to tout a list of high-profile clients. It would be a mistake to believe the provider offers the same level of service, support, and security to all of their customers. Amazon, Google, and Microsoft offer their largest customers dedicated facilities with dedicated staff who follow the customer’s approved operational and security procedures. Most customers do not have that kind of purchasing power. Although a service provider’s marketing may tout these sorts of high-profile clients, those customers may well be paying for a Private Cloud.
“Private Cloud” is typically the current marketing term for situations where a customer organization outsources hardware, software, and operations to a third party and contracts the solution as an “Operational Expense” rather than making any upfront “Capital Expenditures” for procurement of assets.
* Op-Ex vs Cap-Ex is often utilized as an accounting gimmick to help a company present favorable financial statements to Wall Street. There are many ways an organization can abuse this and I’ve seen some doozies.
Two key attractions for service providers considering a public cloud offering are the Monthly Recurring Charge (MRC) and auto renewing contracts. The longer a subscriber stays with the service, the more profitable they become for the provider. Service providers can forecast lower future costs due to several factors:
All of these cost factors contribute to the service provider’s ability to develop a compelling business case to its investors.
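As a toy model (with invented numbers) of why a long-tenured subscriber at a fixed MRC becomes more profitable as the provider’s per-subscriber costs decline:

```python
def cumulative_margin(mrc, monthly_cost, annual_cost_decline, years):
    """Toy model: cumulative margin on one subscriber at a flat MRC while
    the provider's per-subscriber cost falls by a fixed percentage each year."""
    total = 0.0
    cost = monthly_cost
    for _ in range(years):
        total += 12 * (mrc - cost)
        cost *= (1 - annual_cost_decline)
    return total

# $50/month subscriber, $40/month to serve today, costs falling 15% per year.
print(round(cumulative_margin(50, 40, 0.15, 1)))   # 120
print(round(cumulative_margin(50, 40, 0.15, 5)))   # 1220 -- most of it in later years
```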
The subprime market imploded with disastrous consequences when several market conditions changed. New construction saturated many markets and slowed or reversed price trends. Many customers found they couldn’t afford the products and left the market (often through foreclosures, which furthered the oversupply). Many other customers recognized the price increases built into their contracts (variable rate mortgages) and returned to more traditional products (by refinancing to conventional loans). And many sub-prime lenders were found to have engaged in questionable business practices (occasionally fraudulent, often just plain stupid) which eventually forced them out of the business while leaving their customers and investors to clean up the mess.
Like the housing market, public cloud computing is on course to create an oversupply. Many of these cloud providers are signing customers up for contracts and pricing models which will be invalidated in a short time (as processing, storage, and bandwidth continue to get faster and cheaper). And few, if any, of these providers understand the risk environment within which they operate.
Public cloud computing is sure to have a long future for “inherently public” services such as media distribution, entertainment, education, marketing, and social networking.
For personal and organizational computing of “inherently private” data, the long-term value is questionable, and should be questioned.
Current public cloud services offer many customers a cost advantage for CPU processing. They also offer some customers a price advantage for data storage, but few organizations have needs for so-called “big data”. The primary advantage of public cloud services to many organizations is distributed access to shared storage via cheap bandwidth.
Competing on price is always a race to the bottom. And that is a race very few ever truly win.
Public cloud service providers face significant business risks from price competition and oversupply. We saw what happened to the IT industry in the early 2000s, when these were two key factors.
Another factor is declining customer demand. The capabilities of mobile computing and of low-cost on-site systems continue to grow rapidly. At today’s pricing, it may be cheaper to host an application in the cloud than to provide enough bandwidth at the corporate office(s) for mobile workers. That is changing rapidly.
A 1.5 Mbps T1 connection used to cost a business several thousand dollars per month. Now most can get 15 Mbps to 100 Mbps for $79 per month. As last-mile fiber connectivity continues to be deployed, we’ll see many business locations with access to 1 Gbps connections for less than $100 per month.
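As a rough cost-per-megabit illustration (ballpark figures, not quotes from any carrier):

```python
# Ballpark monthly prices; the point is the cost-per-Mbps trend, not the exact numbers.
offers = {
    "T1 (1.5 Mbps)": (1.5, 2000.0),                 # (Mbps, dollars per month)
    "Business cable/fiber (50 Mbps)": (50.0, 79.0),
    "Last-mile fiber (1 Gbps)": (1000.0, 100.0),
}

for name, (mbps, price) in offers.items():
    print(f"{name}: ${price / mbps:,.2f} per Mbps per month")
```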
All of those factors are trumped by one monster of a business risk facing public cloud service providers and customers today: how should they manage the security of inherently private data?
Many organizations have little to no idea of how to approach data classification, risk assessment, and risk mitigation. Even the largest organizations in the critical infrastructure industries are struggling with the pace of change, so it’s no surprise that everyone else is behind on this topic. Additionally, the legal and regulatory systems around the world are still learning how to respond to these issues.
Outsourcing the processing, storage, and/or protection of inherently private data does not relieve an organization of its responsibilities to customers, auditors, regulators, investors, or other parties who may have a valid interest.
Standards, regulations, and customer expectations are evolving. What seems reasonable and prudent to an operations manager in a mid-sized organization might appear negligent to an auditor, regulator, or jury. What seems ok and safe today could have disastrous consequences down the road.
Unless your organization is well versed in data classification and protection, and has the ability to verify a service provider’s operational practices, I strongly recommend approaching Public Cloud services with extreme caution.
If your organization is not inherently part of the public web services “eco-system”, it would be prudent to restrict your interactions with Public Cloud computing to “inherently public” services such as media distribution, entertainment, education, marketing, and social networking. At least until the world understands it a bit better.
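One way to operationalize that restriction is a simple placement rule keyed to data classification. This is a deliberately coarse sketch with a hypothetical classification scheme, not a substitute for a real risk assessment:

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = "inherently public"      # media, marketing, social content
    INTERNAL = "internal business data"
    PRIVATE = "inherently private"    # customer, financial, regulated data

def allowed_in_public_cloud(classification: DataClass,
                            provider_practices_verified: bool = False) -> bool:
    """Coarse placement rule: inherently public data may go to the public cloud;
    everything else stays in-house unless the organization has actually
    verified the provider's operational practices."""
    if classification is DataClass.PUBLIC:
        return True
    return provider_practices_verified and classification is DataClass.INTERNAL

print(allowed_in_public_cloud(DataClass.PUBLIC))               # True
print(allowed_in_public_cloud(DataClass.PRIVATE, True))        # False
```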
The costs of processing and storing private data will continue to get cheaper. If you’re not able to handle your private data needs in-house, there are still plenty of colocation and hosting services to consider. But before you start outsourcing, do some thoughtful housekeeping. Really, if your organization has private data which does not provide enough value to justify in-house processing, storage, and protection… please ask yourselves why you even have this data in the first place.
It is truly amazing how many companies go on consumeristic shopping sprees, buying so-called “COTS packages” in hopes of instant gratification.
The marketing wordsmiths of the software industry have achieved great results in convincing folk that the definition of COTS is something like:
Short for commercial off-the-shelf, an adjective that describes software or hardware products that are ready-made and available for sale to the general public. For example, Microsoft Office is a COTS product that is a packaged software solution for businesses. COTS products are designed to be implemented easily into existing systems without the need for customization.
Sounds great, doesn’t it? Here is a portion of an alternate definition which rarely makes it into the marketing brochures:
“typically requires configuration that is tailored for specific uses”
That snippet is from US Federal Acquisition Regulations for “Commercial off-the-shelf” purchases.
In other words, most COTS packages should at least come with a “some assembly required” label on the box. Granted, most vendors do disclose that the product will need some configuration. But most gloss over the level of effort involved, or sell it as another feature. And most organizations seem to assign procurement decisions to those least able to accurately estimate implementation requirements.
The most offensive of these scenarios involves developer tools and prepackaged application components for software development shops. SDKs and APIs are not even close to being a true COTS product, but numerous vendors will sell them to unsuspecting customers as “ready to use” applications.
If the organization has a team of competent software developers… then really, what is the point of purchasing a “COTS” package which requires more customization (through custom software development) than just developing the features internally?
Some vendors have sold the idea that they provide additional benefits you wouldn’t get from developing it internally. Such as:
Those are all suspect.
Failed software implementations can drive a company into the ground. Complex COTS packages which only serve as a component to be “integrated” into customer systems through custom programming can often be a major contributing factor to project/program failures. The larger the failure, the less likely the organization can retain sufficient stakeholder trust to try again.
Organizations with existing capabilities for large scale internal software development should reconsider the mantra of “All COTS, all the time, everywhere.”
US corporate financial practices haven’t just indoctrinated the citizenry into consumerism; they’ve equally indoctrinated organizations of all kinds. Before you make that next COTS purchase order, pause, and give a moment’s consideration to “producerism”. The long-term benefits could be startling.
By the way, this phenomenon isn’t limited to software components. I’ve seen organizations procure “appliances” at six figure costs because they perceived it to provide an application service which would save them $1 or $2 Million in software development costs downstream. Unfortunately, they eventually learned it would require an additional $2 to $5 Million of software development costs to modify their application portfolio to work with these appliances. After spending (wasting) 18 months and over $1 Million, they eventually found a solution they implemented internally with very little cost (simply replaced an old/deprecated programming language API with a newer one).
I think the Internet saw Microsoft’s new baby and vomited.
-from the Department of What Could Possibly Go Wrong?
Just read something on another blog that left me with one of those Wow/Aha feelings…
“Google yourself from time to time to get your mail.”
Once upon a time, folk in this country could post a letter to someone for “general delivery” at whichever post office the intended recipient might be expected to pass by during their journeys. Upon arrival in a new town, a traveler would just pop into the local post office and ask if there was anything waiting for them. I even used this once myself, for something too large to fit in a mailbox.
As we watch the US Postal Service begin closing locations, it never occurred to me to wonder how someone might replace the concept of “general delivery”. But the blog instruction referenced above demonstrates that the Internet can indeed handle general delivery just as well as email. Pretty cool.