
Category Archives: Cloud

A New Direction: DivConq File Transfer

In the summer of 2011 I (Andy) started posting about the DivConq Framework, our own little open source project. In 2011 and 2012 the focus of the framework was a Java connector to the MUMPS NoSQL database. In 2013 my focus shifted due to customer demands, and the DivConq Framework has now evolved into a file transfer framework.

This change is not really a surprise, since Jonathan’s and my professional expertise is in the file transfer industry.

Although the product is fledgling at present, we believe it offers something more than a “me too”. First, it is open source, which is rare in the enterprise-class file transfer portfolio. Second, the design goal is to be best of breed. Third, we plan to keep it as simple as possible.

We’ll be posting more about the new Managed File Transfer (MFT) product we plan to develop, so check back. In the meantime, enjoy the latest demo, source code and wiki on GitHub.

DivConqMFT on GitHub

 

Posted by on 2014-Sep-09 in Cloud, DivConq, Elastic Architecture, Framework, Gateway


Microsoft Announces “Orleans” – a New Cloud Framework

Microsoft’s eXtreme Computing Group put an interesting ball in play with the announcement of its “Orleans” cloud framework.  In the announcement blog post, the authors write:

Orleans is a software framework for building client + cloud applications. Orleans encourages use of simple concurrency patterns that are easy to understand and implement correctly, building on an actor-like model with declarative specification of persistence, replication, and consistency and using lightweight transactions to support the development of reliable and scalable client + cloud software.

The programming model advanced by Orleans is built on “grains”: small application instances that each take one set of external inputs and concentrate on completing the task those inputs initiated before turning to the next set.  Grain computations are isolated, except when they commit changes to persistent storage and make them globally visible.
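Orleans itself is a C#/.NET library, but the turn-based, actor-like model it describes can be sketched in Java. The `CounterGrain` class below is entirely our own illustration, not Orleans code: the grain owns private state and processes one request per “turn” on its own single-threaded queue, so turns never overlap and no locks are needed inside the grain.

```java
import java.util.concurrent.*;

// Hypothetical sketch of the "grain" idea: private state plus a
// single-threaded queue of turns that never overlap.
class CounterGrain {
    private long count = 0;  // state visible only inside this grain
    private final ExecutorService turns = Executors.newSingleThreadExecutor();

    // Each call is queued as one "turn"; the next turn starts only
    // after the previous one has finished.
    public Future<Long> increment(long by) {
        return turns.submit(() -> { count += by; return count; });
    }

    // Convenience wrapper that waits for the turn to complete.
    public long incrementBlocking(long by) {
        try { return increment(by).get(); }
        catch (Exception e) { throw new RuntimeException(e); }
    }

    public void shutdown() { turns.shutdown(); }
}

public class GrainDemo {
    public static void main(String[] args) {
        CounterGrain grain = new CounterGrain();
        grain.incrementBlocking(5);
        System.out.println(grain.incrementBlocking(3));  // prints 8
        grain.shutdown();
    }
}
```

Because the grain serializes its own turns, callers get correct counts without any synchronized blocks in application code.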

Basic load balancing of work is handled by the Orleans runtime, which activates grains by choosing a server from any within the available cloud, instantiating a grain, and initializing it with the grain’s persistent state.  Pointers to active grains, scalable into the billions, are maintained in a distributed directory based on technologies such as Pastry-style distributed hash tables and Beehive-like caching.
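To see why a hash-based directory scales, here is a minimal consistent-hash sketch in Java (our own illustration, far simpler than Pastry): grain ids hash onto a ring, each id is owned by the next server clockwise, and adding or removing a server moves only a small slice of the directory.

```java
import java.util.*;

// Hypothetical grain directory built on a consistent-hash ring.
// Assumes at least one server has been added before lookups.
public class GrainDirectory {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public void addServer(String name) {
        ring.put(name.hashCode(), name);  // place server on the ring
    }

    public String serverFor(String grainId) {
        // Owner is the first server at or after the grain's hash,
        // wrapping around to the start of the ring if necessary.
        Map.Entry<Integer, String> e = ring.ceilingEntry(grainId.hashCode());
        return (e != null ? e : ring.firstEntry()).getValue();
    }
}
```

The same lookup is deterministic on every node, so no central coordinator is needed to find a grain’s home server.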

Orleans’ elastic architecture explicitly handles entry-level bottleneck issues such as central databases by using data replication.  However, it eschews the “eventual consistency” model used by Cassandra and others in favor of a system of “lightweight, optimistic transactions” that provide durability and atomic persistence.
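For readers unfamiliar with the contrast, an optimistic transaction can be sketched in a few lines of Java. The `OptimisticStore` below is our own hypothetical illustration (Orleans’ actual mechanism is richer): read a versioned snapshot, compute, and commit only if the version is unchanged, retrying otherwise. No locks are held while computing.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a lightweight optimistic transaction over one cell.
public class OptimisticStore {
    static final class Versioned {
        final long version; final long value;
        Versioned(long version, long value) { this.version = version; this.value = value; }
    }

    private final AtomicReference<Versioned> cell =
        new AtomicReference<>(new Versioned(0, 0));

    public long addAndGet(long delta) {
        while (true) {
            Versioned snap = cell.get();  // read a consistent snapshot
            Versioned next = new Versioned(snap.version + 1, snap.value + delta);
            if (cell.compareAndSet(snap, next)) return next.value;  // commit wins
            // Another writer committed first: loop and retry on a fresh snapshot.
        }
    }

    public long get() { return cell.get().value; }
}
```

Under low contention this behaves like uncontended writes; under high contention it trades retries for the durability and atomicity that eventual consistency gives up.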

Orleans is a library written in C# that runs on the Microsoft .NET Framework 4.0.

More information is available directly from the authors in a PDF here:
http://research.microsoft.com/apps/pubs/?id=141999

 

Posted by on 2010-Dec-12 in Cloud, Elastic Architecture, Orleans, Other Organizations


Microsoft Jumps On Columnar Cloud Bandwagon, Provides Cloud Escrow

When we’re in a technical conversation about Business Intelligence (BI), the question about “which database do you use for BI” invariably comes up.  Whatever the database name is, chances are that the type of database will be described as “columnar”.  If you’re a frequent reader of this site, you may know that columnar and “NoSQL” databases are kissing cousins, and that we’re big fans of the Cassandra NoSQL database in these parts (though we advocate some tweaks).
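As a quick hypothetical sketch of why columnar layout matters for BI-style queries (the table and numbers here are our own, not Microsoft’s): storing each column contiguously lets an aggregate query scan only the bytes it needs instead of striding over whole rows.

```java
// Hypothetical column store: one array per column rather than one
// object per row, so scans touch only the columns a query mentions.
public class ColumnStore {
    long[] orderId;
    double[] amount;
    int[] regionId;

    public ColumnStore(int rows) {
        orderId = new long[rows];
        amount = new double[rows];
        regionId = new int[rows];
    }

    // SUM(amount) WHERE regionId = r touches only two columns;
    // the orderId column is never read at all.
    public double sumAmountForRegion(int r) {
        double total = 0;
        for (int i = 0; i < amount.length; i++)
            if (regionId[i] == r) total += amount[i];
        return total;
    }
}
```

On real engines the win is compounded by compression, since values within one column are far more alike than values within one row.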

We’re confident in our positions, but every once in a while it’s good to hear that we’re not just bleeding edge iconoclasts.  Today, Microsoft provided that reassurance when it announced its “Apollo” initiative to the masses.

In a Gavin Clarke interview published in The Register, Quentin Clark, general manager of the Microsoft SQL Server Database Systems Group, talks about, “new columnar technology called Apollo,” which Clark claimed could boost certain queries by between 10 and 50 times.

Other people were also struck by the new Apollo technology during a keynote Microsoft delivered at the PASS Summit on Nov 9.  Here’s one blogger reacting:

“This is a great demo. We’re seeing a trillion rows per minute, filtered & reported on. It’s very slick. This is good. Same technology is also in the database engine. We’re seeing fantastic performance. I might be out of a job. It’s based on the columnar data store technology. It’s a very good thing.”

If you want to see the demo yourself, pull up this page in IE (you need Windows Media Player) and fast forward to about this point.

Though additional details on Apollo are sketchy so far, chances are that the fog will be lifted when the latest preview of Denali (the code name for the next version of SQL Server) is sent to subscribers on MSDN and TechNet, as Microsoft is promising near-parity of its on-premises and cloud-based SQL Server offerings.

Not lost on DivConq is the fact that by providing this level of parity between on-premises and cloud-based offerings Microsoft is giving its customers the ability to choose and later change their deployment models.  In other words, Microsoft is making cloud escrow a reality.  Who said they were evil?

 

Posted by on 2010-Nov-11 in Azure, Cassandra, Cloud, Other Organizations


Microsoft’s New Cloud Strategy: Let’s Support Java

OK, so there was no DivConq in April 2010, but if there had been, we would have posted an article about VMForce, the Java-based strategic alliance between Salesforce.com and VMware.   This move allowed developers to host Spring- and Tomcat-based Java applications on top of (Sales)Force.com services.

There’s also Amazon’s Java option, which essentially amounts to pulling up a Linux image and running your Java apps on it – now sometimes for free.

With so much of the cloud rushing to embrace Java, Microsoft took the unusual step of promising an open Java platform on its Azure cloud in 2011 at its own PDC (as reported by multiple sources).

According to eWeek’s Darryl Taft, Microsoft promises that, “this process will involve improving Java performance, Eclipse tooling and client libraries for Windows Azure. Customers can choose the Java environment of their choice and run it on Windows Azure. Improved Java Enablement will be available to customers in 2011.”

Amitabh Srivastava, senior vice president of Microsoft’s Server and Cloud Division was similarly quoted. “The further we got into this journey into the cloud, we saw that more and more people were writing cloud applications in Java.  There are three things we need to do. One is tooling; we’re going to make the whole Eclipse integration with Azure be first class. Second is we’re going to expose the APIs in Windows Azure in Java. And third we’re investing in optimizing the performance of Java applications on Windows Azure.”

Java in “the .NET cloud”?  Of course, Java’s been supported in Azure for a long time, but it’s certainly not been accorded first class status.  TheRegister’s Gavin Clarke wonders if a race to the bottom in price, as well as developer accessibility, was the real driver behind this unusual move.

What’s also interesting to long-time developers is that “Visual Studio” wasn’t mentioned in the same breath as “Eclipse”, leaving one to wonder whether the “Eclipse tooling” represents a new frontier in Microsoft’s vaunted “embrace and extend” strategy.

 

Posted by on 2010-Nov-11 in Amazon EC2, Azure, Cloud, Elastic Architecture


Intel launches bizarre “Open Data Center Alliance”

In April Intel acquired McAfee – the “Avis” of the anti-virus world to Symantec’s “Hertz” – for $7.7 billion.    The general response in the IT community was “WTF?”

Now, Intel may have done it again by announcing  an “Open Data Center Alliance” (ODCA) that’s all about the cloud…without any support from the cloud vendor community.

“Vendors will not be members,” said Alliance steering committee member Mario Muller.

Intel’s ODCA has some laudable goals, including “federation” of cloud technology through common standards and the avoidance of vendor lock-in.  It also advocates automatic and intelligent scaling of elastic resources – akin to the “elastic architecture” we advocate on this blog.

However, without any technology or cloud services to back it up, the ODCA initiative comes across as a half-hearted “Intel Inside 2.0” – maybe even the beginning of the end of a brand that rose with the PC-based datacenter and may fall with the cloud.

According to a recent TheRegister article, Kirk Skaugen, general manager of Intel’s data center group, indicated that Amazon and other large cloud outfits have been asked to join ODCA, and he admitted that “there’s absolutely no way we can get to where we want to be without [the big-name cloud companies].”

So how far has Intel fallen that they can announce a party promising $50 billion of captive IT spending door prizes and get stiffed by every major cloud vendor?   Maybe it was the guest list, but I don’t think so this time.    For the ODCA to succeed, Intel needs a strategic partner or two and they need them quickly.

 

Posted by on 2010-Oct-10 in Cloud, Elastic Architecture, Other Organizations


Cloud Security Alliance’s Certificate of Cloud Security Knowledge (CCSK) Now Available

Today the Cloud Security Alliance announced that their new Certificate of Cloud Security Knowledge (CCSK) was now available.   This exciting certificate tests awareness of cloud security threats and best practices for securing the cloud.  The material covered in the one-hour, 50-question examination is largely encapsulated in two documents: “Security Guidance for Critical Areas of Focus in Cloud Computing” by the Cloud Security Alliance and the European Network and Information Security Agency (ENISA) whitepaper “Cloud Computing: Benefits, Risks and Recommendations for Information Security”.

Among the companies planning to certify their employees as CCSKs are eBay, ING, Lockheed Martin, Sallie Mae, Zynga, CA, CaseCentral, HCL Technologies, Hubspan, LogLogic, Fiberlink, McAfee, Ping Identity, Novell, Qualys, Solutionary, Symantec, Trend Micro, Veracode, VeriSign, Vordel, WhiteHat Security and Zscaler.

 

Posted by on 2010-Sep-09 in Cloud, Regulation


Google leader says cloud deployment is not the complete answer

At first I was tempted not to post the analysis performed by Google’s Vijay Gill, which concludes that an on-premises deployment may be cheaper than an Amazon cloud deployment if usage is high and constant.

It’s not because I don’t agree, but because the statement seems quite obvious to anyone with any operational background.

In general, if demand is constant and predictable, it makes sense to apply fixed resources such as in-house servers or full time employees against the problem.  If demand is variable or unpredictable, it makes sense to invest more in variable resources such as use-as-you-go cloud resources and seasonal employees.
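A hypothetical back-of-envelope model makes the point; every rate below is illustrative and ours, not Gill’s: a fixed in-house server costs the same every month, while a pay-as-you-go instance costs per hour actually used, so the comparison hinges on utilization.

```java
// Hypothetical break-even comparison of fixed vs. pay-as-you-go resources.
public class BreakEven {
    // A fixed in-house server costs the same regardless of usage.
    public static double onPremMonthly(double fixedCost) {
        return fixedCost;
    }

    // A cloud instance is billed only for the hours it actually runs;
    // a month is roughly 730 hours.
    public static double cloudMonthly(double hourlyRate, double utilization) {
        return hourlyRate * 730 * utilization;
    }

    public static void main(String[] args) {
        // At constant 100% utilization the cloud bill is largest...
        System.out.println(cloudMonthly(0.50, 1.0));
        // ...while at 20% utilization the cloud is a fraction of that.
        System.out.println(cloudMonthly(0.50, 0.2));
    }
}
```

With these illustrative numbers, a fixed box cheaper than $365/month beats the cloud at full, constant load, but loses badly once utilization drops toward 20%.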

But ultimately I decided to post Gill’s analysis because it allows me to remind people that most demand models have both a fixed and variable component.

Furthermore, if fixed on-premises and variable cloud deployment models are the resources of the future, doesn’t it make sense to design hybrid applications today that span across both types of resources: your in-house datacenters and the cloud?

Elastic architecture provides you the scalability you need to span; cloud escrow provides you the ability to choose your deployment model.  Stay tuned to DivConq as we continue to explore these concepts and the technologies behind them.

 

Posted by on 2010-Aug-08 in Amazon EC2, Cloud


Elastic Architecture

“Elastic architecture” is a concept you will read about more frequently as time goes on. It refers to computer architecture designed so that components with different roles in different tiers of an application can each intelligently (and elastically) scale up or down to meet processing requirements.
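As a rough illustration (entirely our own sketch, not a reference implementation), a single tier’s scaling rule might watch its own load and add or remove instances to stay inside a target utilization band; each tier of an elastic application would run its own copy of such a rule.

```java
// Hypothetical per-tier elastic scaling rule.
public class TierScaler {
    int instances;
    final double targetLow = 0.30, targetHigh = 0.70;  // desired band

    public TierScaler(int startingInstances) {
        instances = startingInstances;
    }

    // Called periodically with the tier's average utilization (0..1).
    public int rebalance(double utilization) {
        if (utilization > targetHigh) {
            instances++;                     // scale out: tier is too hot
        } else if (utilization < targetLow && instances > 1) {
            instances--;                     // scale in: tier is idle
        }
        return instances;                    // inside the band: hold steady
    }
}
```

Real autoscalers add damping (cooldown periods, step sizes) so tiers don’t thrash, but the per-tier, band-based decision is the heart of the idea.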

We are not the first people to name this concept. Yahoo.com’s Eran Hammer-Lahav talked about elastic architecture in an August 2007 blog post. In this post he discussed two intersecting themes: applications that could scale themselves, and tiered deployments that rely on a mix of caching, acceleration and replication to keep up with the layers that are horizontally scaling to meet the current load.

Software architect and trainer Simon Brown also came close to naming this concept in a May 2008 blog post. In this post he talked about a “cloud (that) could migrate your data/apps automagically, depending on where they were being accessed from”. This certainly seems like an application that would require multiple layers to intelligently scale horizontally in multiple geographic locations; that’s an example of elastic architecture.

As you probably know by now, DivConq’s main goal is to promote the adoption of highly scalable, cloud-portable technology in multiple tiers of an application. (For example, using Cassandra as the data store at the same time you’re using an application layer built on an array of high-throughput web servers.) Now that goal has a name and we’re proud to promote the adoption of elastic architecture throughout the IT industry.

 

Posted by on 2010-Aug-08 in Cloud, Elastic Architecture


Cloud Escrow: The Ability to Choose and Change Your Deployment Model

In a recent TheRegister post entitled “The cloud’s impact on security”, Tony Lock provides a definition for the groundbreaking concept of “Cloud Escrow”.

“…if you are using external cloud resources, look at how the data and any intellectual property invested in the processing engines employed to manipulate data can be moved to other third party cloud providers, or back into the enterprise, if you need to do that. You could call this ‘Cloud Escrow’.”

Readers of DivConq and other cloud technology blogs are probably already familiar with the term “Cloud Portability” – the ability to move cloud applications and data between different cloud providers or to receive the same services from multiple cloud providers at once.

However, what Lock does with his “Cloud Escrow” definition is remind people that the ability for companies to redeploy entire sets of cloud-deployed applications or data back into company-owned systems or private clouds is extremely important in case:

  • a merger or divestiture impacts IT service delivery
  • a regulatory change or legal ruling requires quick action
  • currently contracted cloud vendors are acquired by a questionable owner or are unable to meet their service level agreements (SLAs) with current ownership

DivConq applauds Lock’s contribution of “Cloud Escrow” to the ongoing discussions being held at every level about the appropriate way to deploy resources into the cloud.   The answer, as always, is to have a realistic fallback plan in case conditions change in a hurry.

 

Posted by on 2010-Jul-07 in Cloud, Regulation


Google Joins Microsoft In GeoPolitical Private Cloud Deployment

While attending the RSA conference in San Francisco this year I wrote a brief article for another blog about Microsoft establishing a geopolitical private cloud for U.S. government use.  At that time I wrote:

As noted by Gavin Clark in The Register:
http://www.theregister.co.uk/2010/02/27/microsoft_government_cloud/
“Among the features (in Microsoft’s latest U.S. government cloud offerings) are secured and separate hosting facilities, access to which is restricted to a small number of US citizens who have cleared rigorous background checks under the International Traffic in Arms Regulations (ITAR).”

In other words, Microsoft has defined a large private cloud segment that will never span political boundaries.   However, not every Federal process must comply with ITAR or even the higher levels of FISMA.  It will be interesting to see whether other cloud vendors follow suit with their own private offerings or if private government clouds restricted to and maintained in a single country are just a niche.

As predicted, Google has followed Microsoft’s lead here.  TheRegister’s Cade Metz notes this in today’s article entitled “Google Apps rubber-stamped for use by US gov”.

The new service segregates Gmail and Google Calendar data in a section of Google’s back-end infrastructure that’s separate from services used by non-government users, and all the data centers housing these segregated applications are located in the continental United States. Google says that in the future, it will segregate other applications in this way.

So…that’s two major cloud vendors getting behind permanent private clouds delineated by geopolitical boundaries.  Who’s next?

 

Posted by on 2010-Jul-07 in Cloud, Other Organizations, Regulation
