Welcome to dotNETZone.gr

Windows Azure SDK 1.4 was released yesterday with no breaking changes and a lot of stability fixes. You can get the bits from here. Among the fixes is an important one, especially if you use a source control system: web.config was being locked, but Windows Azure Tools still needed write access to update the machine key. There are also new features for the CDN (quoting from the release announcement):

Windows Azure CDN for Hosted Services
Developers can use the Windows Azure Web and VM roles as “origin” for objects to be delivered at scale via the Windows Azure Content Delivery Network. Static content in your website can be automatically edge-cached at locations throughout the United States, Europe, Asia, Australia and South America to provide maximum bandwidth and lower latency delivery of website content to users.

Serve secure content from the Windows Azure CDN
A new checkbox option in the Windows Azure management portal to enable delivery of secure content via HTTPS through any existing Windows Azure CDN account.

Get the bits and enjoy; I've already updated :)



0 comments
Posted in:

I posted an article at CodeProject explaining how you can use Windows Azure AppFabric Cache (CTP) in your applications.

You can find the article here -> http://www.codeproject.com/KB/azure/WA-AppFabric-cache.aspx

Please let me know whether you liked it or not, and of course any comments are more than welcome!

Thank you,


0 comments
Posted in:

A lot of interesting things have been going on lately on the Windows Azure MVP list, and I'll try to pick the best ones that I can share and turn them into posts.

During an Azure bootcamp, a fellow Windows Azure MVP had a very interesting question: "What happens if someone is updating a BLOB and a request comes in to serve that BLOB?"

The answer came from Steve Marx pretty quickly and I'm just quoting his email:

"The bottom line is that a client should never receive corrupt data due to changing content.  This is true both from blob storage directly and from the CDN.
The way this works is:
·         Changes to block blobs (put blob, put block list) are atomic, in that there’s never a blob that has only partial new content.
·         Reading a blob all at once is atomic, in that we don’t respond with data that’s a mix of new and old content.
·         When reading a blob with range requests, each request is atomic, but you could always end up with corrupt data if you request different ranges at different times and stitch them together.  Using ETags (or If-Unmodified-Since) should protect you from this.  (Requests after the content changed would fail with “condition not met,” and you’d know to start over.)
Only the last point is particularly relevant for the CDN, and it reads from blob storage and sends to clients in ways that obey the same HTTP semantics (so ETags and If-Unmodified-Since work).
For a client to end up with corrupt data, it would have to be behaving badly… i.e., requesting data in chunks but not using HTTP headers to guarantee it’s still reading the same blob.  I think this would be a rare situation.  (Browsers, media players, etc. should all do this properly.)
Of course, updates to a blob don’t mean the content is immediately changed in the CDN, so it’s certainly possible to get old data due to caching.  It should just never be corrupt data due to mixing old and new content."

So, as you can see from Steve's reply, there is no chance of getting corrupt data, unlike with some other vendors; only stale (old) data.
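The range-request point can be sketched with plain HttpWebRequest (hypothetical URL and split point; a sketch, not production code): read the blob in two ranges and send If-Match with the ETag from the first response, so a mid-read update surfaces as "412 Precondition Failed" instead of stitched-together corruption.

```csharp
using System;
using System.IO;
using System.Net;

class RangeReadExample
{
    // Reads a blob in two chunks; starts over if the blob changed in between.
    static byte[] ReadInTwoChunks(string blobUrl, int splitAt, int totalLength)
    {
        while (true)
        {
            var first = (HttpWebRequest)WebRequest.Create(blobUrl);
            first.AddRange(0, splitAt - 1);
            string etag;
            var buffer = new MemoryStream();
            using (var response = (HttpWebResponse)first.GetResponse())
            {
                etag = response.Headers[HttpResponseHeader.ETag];
                response.GetResponseStream().CopyTo(buffer);
            }

            var second = (HttpWebRequest)WebRequest.Create(blobUrl);
            second.AddRange(splitAt, totalLength - 1);
            second.Headers["If-Match"] = etag; // fail with 412 if the blob changed
            try
            {
                using (var response = (HttpWebResponse)second.GetResponse())
                {
                    response.GetResponseStream().CopyTo(buffer);
                    return buffer.ToArray();
                }
            }
            catch (WebException ex)
            {
                var http = ex.Response as HttpWebResponse;
                if (http != null && http.StatusCode == HttpStatusCode.PreconditionFailed)
                    continue; // "condition not met": the blob changed, start over
                throw;
            }
        }
    }
}
```

A real client would cap the number of retries; the loop is left open here only to keep the sketch short.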


0 comments
Posted in:

A fellow Windows Azure MVP, Rainer Stropek, recently had a very interesting case where he got a "reached quota" message for his SQL Azure database, although his query indicated he was using only about 750MB of a 1GB Web Edition database.

The problem was narrowed down to a bug in the documentation (http://msdn.microsoft.com/en-us/library/ff394114.aspx). The correct query to use, per Microsoft Support's suggestion, is:

SELECT SUM(reserved_page_count)*8.0/1024 + SUM(lob_reserved_page_count)*8.0/1024 FROM sys.dm_db_partition_stats

in order to get accurate metrics.

Be sure to use that query, so you won't have any unpleasant surprises.
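If you want to run the check from code, a minimal sketch with ADO.NET looks like this (the connection string is a made-up placeholder; replace server, database and credentials with your own):

```csharp
using System;
using System.Data.SqlClient;

class DatabaseSizeCheck
{
    static void Main()
    {
        // Hypothetical connection string; substitute your own values.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=user@myserver;Password=secret;Encrypt=True;";

        const string sizeQuery =
            "SELECT SUM(reserved_page_count)*8.0/1024 + " +
            "SUM(lob_reserved_page_count)*8.0/1024 " +
            "FROM sys.dm_db_partition_stats";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sizeQuery, connection))
        {
            connection.Open();
            // ExecuteScalar returns the single value the query produces, in MB.
            var usedMb = Convert.ToDouble(command.ExecuteScalar());
            Console.WriteLine("Database size: {0:F1} MB", usedMb);
        }
    }
}
```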


0 comments
Posted in:

This week I joined Devoteam Belgium as a .NET Consultant and Microsoft Trainer, focused on Windows Azure, SharePoint and, of course, trainings. It's been a great first week; I met a lot of interesting people and found out that there are other MVPs working around me:

  • Serge Luca (MVP Sharepoint , Microsoft Certified Trainer)
  • Didier Danse (MVP Sharepoint)
  • Kurt Roggen (MVP Management Infrastructure , Microsoft Certified Trainer)
Which is really, really cool.

I'm looking forward to missions and trainings, and I'm really happy to be here. Great people, a great environment and a lot of positive vibe in the air. What more can you ask for?


0 comments
Posted in:

Amazon announced the AWS Free Usage Tier (http://aws.amazon.com/free/) last week; it starts on November 1st. I know some people are excited about this announcement, and so am I, because competition between cloud providers always brings better service for the customer. In Amazon's case, though, it's more of a marketing trick than a real benefit, and I'll explain why in this post. Let me remind you at this point that this is strictly a personal opinion. Let me also say that I have experience with AWS too.

Certainly, having something free to start with is always nice, but what exactly is free, and how does it compare to the Windows Azure platform? First of all, Windows Azure has a similar free startup offer, called the Introductory Special, which gives you free compute hours, storage space and transactions, a SQL Azure Web Edition database, AppFabric connections and ACS transactions, and free traffic (inbound and outbound), all up to some limit of course. Then there is the BizSpark program, which also gives you a very generous package of Windows Azure platform benefits to start developing on, and of course let's not forget the MSDN Subscription Windows Azure offer, which is even more buffed up than the others.

OK, I promised the Amazon part, so here it is. The AWS billing model is different from Windows Azure's. It's very detailed: a lot of things are broken into smaller pieces, each one billed in a different way. Some facts:

• Load balancing on EC2 instances is not free. Not only do you pay compute hours, you're also charged for the traffic (GB) that goes through your balancer. Windows Azure load balancing is just there and it just works; you don't pay compute hours and traffic just for that.
• On EBS you're charged for every read and write (I/O), charged for the amount of space you use, snapshot size counts not against your total but on its own, and you're also charged per snapshot operation (Get or Put). On Windows Azure Storage you pay for two things: transactions and the amount of space you consume. Also, for snapshots only your delta (the differences) is counted against your total, not the whole snapshot.
• SimpleDB is charged per machine hour* consumed and per GB of storage. With Windows Azure Tables you pay only for storage and transactions. You might say I should compare this to S3, but I disagree: S3 is not as close to Windows Azure Tables as SimpleDB is. What is even more disturbing about S3 is that it offers a durability guarantee of 99.99%, which actually means you can lose (!!) 0.01% of your data.
• There is no RDS instance (based on MySQL) included in the free tier. With the Introductory Special you get a SQL Azure Web database (1GB) for 3 months, or for as long as you have a valid subscription when you're using the MSDN Windows Azure offer, where you actually get 3 databases.

For me, the biggest difference is the development experience. Windows Azure offers a precise local emulation of the cloud environment on your development machine, called the DevFabric, which ships with Windows Azure Tools for VS2008/VS2010. All you have to do is press F5 on your cloud project and you get the local emulation on your machine to test, debug and prepare for deployment. Amazon doesn't offer this kind of development environment. There is integration with Eclipse and other IDEs, but every time you hit the Debug button you're actually hitting real web services with your credentials, consuming from your free tier, and as soon as you've consumed it you start paying to develop and debug. The free tier is more like a "development tier" to me. Windows Azure offers you both: the development experience you expect, at no cost, on your local machine with the DevFabric, and a development experience on the real cloud environment where you can deploy and test your application, also without any cost unless you exceed your free allowance.

Some may say you can't compare AWS to Windows Azure because they are not the same: AWS is mostly IaaS (Infrastructure as a Service) and Windows Azure is PaaS (Platform as a Service), and I couldn't agree more. But what I'm comparing here are features that already exist on both services. I'm not comparing EC2 instance sizes to Windows Azure instance sizes; I'm comparing load balancing, SimpleDB and so on.

* A machine hour is a different concept from a compute hour and is beyond the scope of this post.

    Thank you for reading and please feel free to comment.


0 comments
Posted in:

It's been a long time since I posted something here, almost 3 months. The thing is, I had some long vacations during August, I moved to a new company, even to a new country. Trust me, there was no time for blogging. To catch up a little bit:

1. I was awarded the Windows Azure MVP title.
2. I'm speaking at the keynote of ITPro|DevConnections 2010, along with some of the best IT pros in Greece. It's a huge technology event held in Greece on the 27th and 28th of November, and it's almost sold out, with more than 280 participants.
3. I moved to Brussels for at least 6 months, for a project.
4. I'll be at TechEd 2010 Berlin helping Brian H. Prince with Windows Azure hands-on labs and trainings. It will be nice to meet some people there, so come and find me!
5. Lots of other stuff...

I'll catch up with blogging next week. There is a lot of stuff I want to write about, so be sure to check my blog every now and then, or subscribe to my RSS feed. Right after ITPro|DevConnections 2010 I'll post slides and code, and trust me, it's going to be mind-blowing. IT pros and developers united = pure magic :)

    Take care all!


1 comment
Posted in:

Recently, I’ve been looking for a way to persist the state of an idling workflow in WF4. There is a way to use SQL Azure to achieve this, after modifying the scripts because they contain unsupported T-SQL commands, but it’s total overkill to use an RDBMS just to persist workflow state if you’re not using it for anything else.

I decided to modify the FilePersistence.cs of the Custom Persistence Service sample in the WF4 Samples Library and make it work with Windows Azure blob storage. I created two new methods to serialize and deserialize workflow state to and from blob storage.

    Here is some code:

    private void SerializeToAzureStorage(byte[] workflowBytes, Guid id)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var container = account.CreateCloudBlobClient().GetContainerReference("workflow-persistence");
        var blob = container.GetBlobReference(id.ToString());
        blob.Properties.ContentType = "application/octet-stream";

        // Wrap the serialized workflow bytes in a readable stream and upload it.
        using (var stream = new MemoryStream(workflowBytes))
        {
            blob.UploadFromStream(stream);
        }
    }

    private byte[] DeserializeFromAzureStorage(Guid id)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var container = account.CreateCloudBlobClient().GetContainerReference("workflow-persistence");
        var blob = container.GetBlobReference(id.ToString());
        return blob.DownloadByteArray();
    }

Just make sure you’ve created the “workflow-persistence” blob container before using these methods. (Container names may contain only lowercase letters, numbers and dashes, so an underscore in the name would be rejected by the service.)


0 comments
Posted in:

Once again, Microsoft proved that it values its customers, whether big enterprises or small startups. We’re a small privately-held business and I personally have a major role in it, as I’m one of the founders. Recently, I’ve been giving some pretty nice presentations and a bunch of sessions for Microsoft Hellas about Windows Azure and cloud computing in general.

I was using the CTP account(s) I’ve had since PDC 08, and I had a lot of services running there from time to time, all for demo purposes. But with the second commercial launch wave Greece was included, so I had to upgrade my subscription and start paying for it. I was OK with that, because the MSDN Premium subscription includes 750 compute hours per month, SQL Azure databases and other stuff for free. I went through the upgrade process from CTP to paid, everything went smoothly, and there I was, waiting for my CTP account to switch to read-only mode and eventually “fade away”. During that process, I made a small mistake: I miscalculated my running instances. I actually missed some. That turned out to be a mistake that cost me some serious money for show-case/marketing/demo projects running on Windows Azure.

About two weeks ago I had an epiphany during the day: “Oh, crap... did I turn that project off? How many instances do I have running?”. I logged on to the billing portal and, sadly for me, I had been charged for something like 4,500 hours because of the forgotten instances and my miscalculation. You see, I had done a demo about switching between instance sizes, and some of those instances were running as large VMs. That’s four (4) times the price per hour.

It was clearly my mistake and I had to pay for it (literally!). But then I tweeted about my bad luck, to help others avoid the same mistake, the very thing I had been warning my clients about all this time. Some people from Microsoft got interested in my situation; I explained what happened, and we ended up with a pretty good deal just 3 days after I tweeted. But that was an exception, so certainly DON’T count on it.

Bottom line: be careful and plan correctly. Mistakes do happen, but the more careful we are, the rarer they will be.

* I want to publicly say thank you to everyone who was involved in this and helped me sort things out so quickly.


0 comments
Posted in:

“Paspartu” (passe-partout) is French for “one size fits all”. Recently I’ve been coming across posts explaining and “promoting” the idea of spawning threads inside a worker role, each with its own unit of work. They all share, and describe, the same idea.

    The idea

You have some work to do, and you want to do it in the most efficient way, without underutilized resources, which is one of the benefits of cloud computing anyway.

    The implementation

You have a worker process (a Worker Role on Windows Azure) which processes some data. Certainly that’s a valid implementation, but it’s not a best practice: most of the time your instance will be underutilized, unless you’re doing CPU- and memory-intensive work and have a continuous flow of data to process.

In another implementation, we created a master-slave pattern. A master distributes work to slave worker roles; the slaves pick up their work, do their stuff, return the result and start over again. Still, in some cases that’s not the best idea either. Same cons as before: underutilized resources and a high risk of failure. If the master dies and the system isn’t properly designed, your system dies; you can’t process any data.

So another one appeared: inside a worker role, spawn multiple threads, each running its own process or method, doing its work and returning the result. Underutilization is minimized, the thread pool does all the hard work for us, and as soon as .NET 4.0 is supported on Windows Azure, parallelization becomes easy and, allow me to say, mandatory. But what happens if the worker instance dies? Or restarts? Yes, your guess is correct: you lose all threads, and all the processing done up to that moment is lost, unless you persist it somehow. If you had multiple instances of your worker role imitating that behavior, that wouldn’t happen; you’d only lose the work of the instance that died.
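To make the thread-spawning variant concrete, here is a minimal sketch (the helper methods are hypothetical stand-ins for queue and storage plumbing): the key point is persisting each result before acknowledging the message, so that work lost with a dying instance is simply retried rather than gone.

```csharp
using System;
using System.Threading;

public class MultiThreadedWorker
{
    // Called from the worker role's Run() loop.
    public void Run()
    {
        // Queue one long-running processing loop per core on the thread pool.
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            ThreadPool.QueueUserWorkItem(ProcessLoop);
        }

        Thread.Sleep(Timeout.Infinite); // keep Run() alive
    }

    private void ProcessLoop(object state)
    {
        while (true)
        {
            var message = TryGetNextMessage(); // e.g. read from an Azure queue
            if (message == null)
            {
                Thread.Sleep(1000);
                continue;
            }

            var result = Process(message);

            // Persist the result BEFORE deleting the message; if the instance
            // dies between these two calls, the message reappears on the queue
            // and is retried, so nothing is silently lost.
            PersistResult(result);
            DeleteMessage(message);
        }
    }

    // Hypothetical helpers standing in for queue/storage plumbing.
    private object TryGetNextMessage() { return null; }
    private object Process(object message) { return message; }
    private void PersistResult(object result) { }
    private void DeleteMessage(object message) { }
}
```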

As Eugenio Pace says, “You have to be prepared to fail”, and he’s right. At any moment your instance can die without a single notice, and you have to be prepared to deal with it.

    Oh, boy.

So really, there is no single solution or best practice; for me, it’s best guidance. Depending on your scenario, one of the solutions above, or even a new one, can fit you better than it fits others. Every project is unique and has to be treated as such. Try to think outside the box, and remember that this is deep water for everyone; it’s just that some of us swim better.


0 comments
Posted in:


Every time something new emerges in the IT market, there are three distinct categories of people: early adopters, the late majority and skeptics. Each has its own reasons and its own concerns about when, why and whether it will adopt the new technology, either completely or in a hybrid mode.

All of them have some things in common: they share the same concerns about security, billing, taxes, availability, latency and probably a few others.

    Concerns and my thoughts.

Billing is something that can easily be solved and explained by clever, good marketing. Unfortunately, there is no such thing as local billing: cloud computing services are truly global, and the same billing model, the same prices and the same payment methods are used everywhere to provide a fair and consistent service to every customer. This has to change. For some markets a service can be really cheap, while for others it can be really expensive. Increasing the price in some countries and decreasing it in others would make the service fairer and easier to adopt. Using a credit card to identify the country is a reasonable method, but there is a problem. It’s called taxes.

Taxes are a way for a government to make money. In many countries, Greece being one of them, having a credit card with a decent limit is a privilege, and unfortunately I mean that in a bad way. Interest is quite high, and with such an unstable tax policy you can never be sure there won’t be extra fees to pay sooner or later. I guess this is not only our problem but also that of other countries, mostly emerging markets. Providing another way of paying monthly fees for service usage could easily overcome this.

Security. Oh yes, security. Countless questions during presentations and chats are about security; tons of “what ifs”. Yes, it’s a big issue, but too much skepticism is never good. I believe people are not really worried about security issues like data leakage or theft; they are worried because they somehow lose control of their information, or at least that is what they believe. The idea that their data is stored not on their own hardware but somewhere else, and not even in the same country, terrifies them. I’m not sure anything can fully subdue this concern, but there could at least be some localized data centers; for banks, for example, regulatory laws demand that data be stored in the same country, if not on-premises on hardware owned by the bank. A private cloud could probably meet those regulations.

Latency. That’s an easy one; its principle is the same as security’s. My data is over there, and there might be significant latency before I get a response. Yes, there is a delay; no, it’s not that big: probably somewhere between 60 and 100 ms. For applications that are not real-time, this is really low; you can even play shoot-’em-up games with 100 ms latency. The only thing we can do is require a decent DSL line from our customers in case our locally installed application accesses a cloud service. Picking the right region to deploy our application to can also have a significant impact on latency.

Availability. People are worried about their data not being available when it is most needed. The further away their data is, the more points of failure: their internet line, their ISP’s line, a ship cutting cables 4,000 km away. Most, if not all, cloud service providers promise three or four “nines” of uptime and availability, but there are plenty of examples of services failing due to unpredicted code paths or human error (e.g. Google). Other companies have proved more trustworthy and reliable.


Concluding this post, I want to make something clear: I’m not part of any of those distinct groups. I started playing with cloud computing services right after Amazon removed the beta label from AWS, back in 2008 (April, if I recall correctly), with Windows Azure following at PDC ’08. I got my first token back then and started playing with it. I’ve seen Windows Azure shape up and change over those two years into something amazing and really groundbreaking. Windows Azure can successfully lower or even eliminate your concerns on some of the matters discussed above, but there is room for improvement and there always will be. I’m going to dig a little deeper into those matters and try to provide more concrete answers and thoughts.

    Thank you for reading so far,


0 comments
Posted in:


    Yes, it is.

Table storage keeps multiple replicas and guarantees uptime and availability, but for business-continuity reasons you also have to be protected from failures in your own application: business logic errors can harm the integrity of your data, and then all the Windows Azure storage replicas will faithfully replicate that harm. You have to be protected from those scenarios, and having a backup plan is necessary.

    There is a really nice project available on Codeplex for this purpose: http://tablestoragebackup.codeplex.com/


0 comments
Posted in:


Well, I’m still here. I know it’s been ages since my last post, but believe me, I’ve been quite busy with tons of stuff and there was no time to blog.

    So, let’s catch up a little bit:

1) I’ve been awarded the 2010 MVP title in Visual C#. Thank you very much, Microsoft.

2) I attended the Regional CEE MVP Summit that took place in Athens, Greece. Photos will be up soon.

3) I started playing with Cassandra from the Apache Foundation, and I’m currently looking for a way to make it run on Windows Azure. I’ve just started and I’ll keep blogging about it. It’s really cool and I hope it will work!

    That’s all for now!


0 comments
Posted in:

Just three days ago, on February 13th, SQL Azure got an update. Long-requested features like upgrading and downgrading between the Web and Business editions are finally implemented; it’s easy, just a single command to switch between editions. Some DMVs were also introduced to match on-premises SQL Server, and the idle session timeout was increased.

In detail*:

    Troubleshooting and Supportability DMVs

    Dynamic Management Views (DMVs) return state information that can be used to monitor the health of a database, diagnose problems, and tune performance. These views are similar to the ones that already exist in the on-premises edition of SQL Server.

    The DMVs we have added are as follows:

    · sys.dm_exec_connections – This view returns information about the connections established to your database.

· sys.dm_exec_requests – This view returns information about each request that executes within your database.

    · sys.dm_exec_sessions – This view shows information about all active user connections and internal tasks.

    · sys.dm_tran_database_transactions – This view returns information about transactions at the database level.

    · sys.dm_tran_active_transactions – This view returns information about transactions for your current logical database.

    · sys.dm_db_partition_stats – This view returns page and row-count information for every partition in the current database.
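As a small example of using these views together (my own query, not from the announcement), the following joins sessions to their active requests to spot long-running statements:

```sql
-- List active requests with their session details and elapsed time.
SELECT s.session_id,
       s.login_name,
       r.command,
       r.status,
       r.total_elapsed_time / 1000.0 AS elapsed_seconds
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
  ON r.session_id = s.session_id
ORDER BY r.total_elapsed_time DESC;
```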

    Ability to move between editions

One of the most requested features was the ability to move up and down between the Web and Business edition databases. This gives you greater flexibility: if you approach the upper limit of the Web edition database, you can easily upgrade with a single command. You can also downgrade, if your database is below the allowed size limit.

    You can now do that using the following syntax:

ALTER DATABASE database_name
MODIFY (MAXSIZE = {1 | 10} GB)
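For example, growing a hypothetical database named mydb to the Business edition cap, and shrinking it back down, would look like this (the statement is run while connected to the master database):

```sql
-- Grow a Web edition database (1 GB cap) to the Business edition cap.
ALTER DATABASE mydb MODIFY (MAXSIZE = 10 GB);

-- And back down again, once the data fits under 1 GB.
ALTER DATABASE mydb MODIFY (MAXSIZE = 1 GB);
```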


    Idle session timeouts

We have increased the idle connection timeout from 5 to 30 minutes. This will improve your experience while using connection pooling and other interactive tools.

    Long running transactions

    Based on customer feedback, we have improved our algorithm for terminating long running transactions. These changes will substantially increase the quality of service and allow you to import and export much larger amounts of data without having to resort to breaking your data down into chunks.

    * Source: MSDN Forums announcement

Provide your feedback at http://www.mygreatsqlazureidea.com. There are some great features requested there, like the ability to automatically store a backup in blob storage.


0 comments
Posted in:

    Windows Azure configuration files support an osVersion attribute where you can set which version of the Windows Azure OS should run your service.

This feature doesn’t make much sense at the moment, as there is only one version (WA-GUEST-OS-1.0_200912-01), but in the future it’s going to be very handy.

    You can learn more about it here.
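For reference, here is where the attribute lives in a service configuration file (hypothetical service and role names; a sketch, not a complete configuration):

```xml
<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osVersion="WA-GUEST-OS-1.0_200912-01">
  <Role name="WebRole1">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```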


0 comments
Posted in:

Recently on the MSDN Forums, people were asking how they can detect whether their web application is running in the cloud or locally (against Dev Storage). Besides the obvious part (code inside a Web Role or Worker Role Start() method only exists in a cloud template), what if you want to make that check somewhere else, for example inside a Page_Load method or inside a library (DLL)?

If you’re trying to detect it at the “UI” level, say in Page_Load, you can simply check your headers: Request.Headers["Host"] will do the trick. If it’s “localhost” (or whatever you’ve configured it to be), you know you’re running locally.

    But how about a Library? Are there any alternatives?

Well, it’s not the most bulletproof method, but it has served me well so far, and I don’t think it’s going to stop working, as it relies on a fundamental architectural element of Windows Azure. There are specific Environment properties that raise a SecurityException because you’re not allowed to read them; one of them is MachineName. So, if Environment.MachineName raises an exception, you’re probably running in the cloud. As I said, it’s not bulletproof: if an IT administrator applies a CAS policy that restricts specific properties, the exception can be raised locally too, but you get my point. A combination of tricks can give you the desired result.
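Putting both tricks together, a helper might look like this (a sketch, and not bulletproof, for the reasons above):

```csharp
using System;
using System.Security;
using System.Web;

public static class EnvironmentDetector
{
    // Library-level probe: reading certain Environment properties is
    // restricted under the Windows Azure trust policy.
    public static bool IsProbablyCloud()
    {
        try
        {
            var name = Environment.MachineName;
            return false; // readable: most likely running locally
        }
        catch (SecurityException)
        {
            return true; // restricted: most likely running in the cloud
        }
    }

    // UI-level variant: inspect the Host header of the current request.
    public static bool IsLocalRequest(HttpRequest request)
    {
        var host = request.Headers["Host"] ?? string.Empty;
        return host.StartsWith("localhost", StringComparison.OrdinalIgnoreCase)
            || host.StartsWith("127.0.0.1");
    }
}
```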


0 comments
Posted in:

You’re trying to create a queue on Windows Azure and you’re getting a “400 Bad Request” as an inner exception. Well, there are two possible scenarios:

1) The name of the queue is not valid. It has to be a valid DNS name to be accepted by the service.

2) The service is down, or something went wrong and you just have to retry, so implementing retry logic in your service initialization is not a bad idea. I might even say it’s mandatory.

    The naming rules

  • A queue name must start with a letter or number, and may contain only letters, numbers, and the dash (-) character.
  • The first and last letters in the queue name must be alphanumeric. The dash (-) character may not be the first or last letter.
  • All letters in a queue name must be lowercase.
  • A queue name must be from 3 through 63 characters long.

    More on that here.
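The rules above can be checked up front, before calling the service (my own helper, not an SDK API; the regular expression simply encodes the four rules):

```csharp
using System;
using System.Text.RegularExpressions;

public static class QueueNameValidator
{
    // Mirrors the naming rules above: 3-63 characters, lowercase letters,
    // digits and dashes only, with an alphanumeric first and last character.
    private static readonly Regex ValidName =
        new Regex("^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", RegexOptions.Compiled);

    public static bool IsValidQueueName(string name)
    {
        return name != null && ValidName.IsMatch(name);
    }
}
```

For example, IsValidQueueName("my-queue") returns true, while IsValidQueueName("My_Queue") and IsValidQueueName("ab") return false.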

Thank you,

0 comments
Posted in:

The Windows Azure training kit is the best starting point if you want to get involved in Azure development. It helps you understand the basics of Windows Azure, its components and how the service fits together.

The December release includes some updates and samples from PDC 09, so don’t miss it.

    You can download the kit from here –> http://www.microsoft.com/downloads/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en



0 comments
Posted in:

Happy new year, everybody; I wish you all the best for 2010. Now that I’m back, I’m going to continue posting on Windows Azure. I had some really good rest during the holidays and I really enjoyed Xmas this time. I hope you did too! :)


0 comments
Posted in:

One of the latest features introduced in SQL Azure is the ability to apply firewall settings to your database and allow only specific IP ranges to connect to it. This can be done through the SQL Azure portal, or in code using stored procedures.

    If you want to take a look at which rules are active on your SQL Azure database, you can use:

    select * from sys.firewall_rules

    That will give you a view of your firewall rules.

If you want to add a new firewall rule, you can use "sp_set_firewall_rule". The syntax is "sp_set_firewall_rule <firewall_rule_name>, <ip range start>, <ip range end>". For example:

    exec sp_set_firewall_rule N'My setting','',''

    If you want to delete that rule, you can use:

    exec sp_delete_firewall_rule N'My setting'
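With a concrete range filled in (a made-up one from the 203.0.113.0/24 documentation block; substitute your own), the call looks like this; the firewall stored procedures are run while connected to the master database:

```sql
-- Allow a hypothetical office subnet, then list the rules to confirm.
EXEC sp_set_firewall_rule N'Office', '203.0.113.1', '203.0.113.254';
SELECT * FROM sys.firewall_rules;
```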


0 comments
Posted in:

When you have a service running on Windows Azure, the last thing you want is to monitor it every now and then and decide manually whether specific actions are needed based on your monitoring data. You want the service to be, to some degree, self-managing: to decide on its own which actions should take place to satisfy a monitoring alert. In this post I’m not going to use the Service Management API to increase or decrease the number of instances; instead I’m going to log a warning. In a future post I’ll use the API in combination with this logging, so consider this the first post in a series.

The most common scenario is dynamically increasing or decreasing VM instances to be able to process more messages as your queues fill up. You have to create your own “logic”, a decision mechanism if you like, which executes some steps and brings the service to a state that satisfies your condition, because there is no out-of-the-box solution from Windows Azure. A number of companies have announced that their monitoring/health software is going to support Windows Azure; you can find more information by searching the internet, or by visiting the Windows Azure portal under the Partners section.

In the code below I’m checking the number of messages in a queue on every role cycle:

    CloudQueue cloudQueue = cloudQueueClient.GetQueueReference("calculatep");
    cloudQueue.CreateIfNotExist();
    cloudQueue.FetchAttributes();

    /* Call this method to calculate your workload */
    CalculateWorkLoad(cloudQueue.ApproximateMessageCount);

    and this is the code inside CalculateWorkLoad:

       public void CalculateWorkLoad(int? messages)
       {
           /* If there are messages, find the average number of messages
              available every X seconds, where X = the thread sleep time
              (in my case every 5 seconds) */
           if (messages != null)
               average = messages.Value / (threadsleep / 1000);

           DecideIncDecOfInstances(average);
       }

    Note that if you want to get accurate values on queue’s properties, you have to call FetchAttributes();
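    To put the snippet above in context, here is a rough sketch of where it could live inside a worker role's Run() method. This is an assumption, not code from the post: the threadsleep field (5000 ms), the "DataConnectionString" setting name and the loop shape are mine, and it assumes CloudStorageAccount.SetConfigurationSettingPublisher was already called in OnStart().

    ```csharp
    /* Hypothetical worker role loop polling the queue every threadsleep ms. */
    public override void Run()
    {
        var cloudQueueClient = CloudStorageAccount
            .FromConfigurationSetting("DataConnectionString")
            .CreateCloudQueueClient();

        while (true)
        {
            CloudQueue cloudQueue = cloudQueueClient.GetQueueReference("calculatep");
            cloudQueue.CreateIfNotExist();
            /* FetchAttributes() populates ApproximateMessageCount */
            cloudQueue.FetchAttributes();

            CalculateWorkLoad(cloudQueue.ApproximateMessageCount);

            Thread.Sleep(threadsleep);   // e.g. 5000 ms
        }
    }
    ```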

    There is nothing fancy in my code: I'm just computing an average workload (number of messages in my Queue) every 5 seconds and passing this value to DecideIncDecOfInstances(). Here is the code:

       public void DecideIncDecOfInstances(int average)
       {
           int instances = 2;

           /* If my average is above 1000 */
           if (average > 1000)
               OneForEveryThousand(average, ref instances);

           WarnWeNeedMoreVM(instances);
       }

    OneForEveryThousand increases the default number of instances, which is two (2), by one (1) for every thousand (1000) messages in the Queue's average count.
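    The body of OneForEveryThousand isn't shown in the post, so here is a minimal sketch consistent with its description (one extra instance per thousand messages, on top of the default of two); treat it as an assumption rather than the original implementation:

    ```csharp
    /* Sketch only: add one instance for every thousand messages
       in the average (integer division). */
    public void OneForEveryThousand(int average, ref int instances)
    {
        /* An average of 3500 messages adds 3 instances,
           bringing the default of 2 up to 5. */
        instances += average / 1000;
    }
    ```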

    This is the final part of my code, WarnWeNeedMoreVM, which logs our need for more or fewer VMs.

       public void WarnWeNeedMoreVM(int instances)
       {
           if (instances == 2) return;

           Trace.WriteLine(String.Format("WARNING: Instances Count should be {0} on this {1} Role!",
                           instances, RoleEnvironment.CurrentRoleInstance.Role.Name), "Information");
       }

    In my next post in this series, I'm going to use the newly released Service Management API to upload a new configuration file which increases or decreases the number of VM instances in my role(s) dynamically. Stay tuned!



    In general, there are two kinds of updates you'll mainly perform on Windows Azure. One is changing your application's logic (so-called business logic), e.g. the way you handle/read queues, how you process data, protocol updates etc.; the other is schema updates/changes. I'm not referring to SQL Azure schema changes, which are a different scenario and approach, but to Table storage schema changes, and to be more precise only on specific entity types, because, as you already know, Table storage is schema-less. As with In-Place upgrades, the same logic applies here too. Introduce a hybrid version which handles both the new and the old version of your entity (newly introduced properties), and then proceed to your "final" version which handles only the new version of your entities (and properties). It's a very easy technique, and I'm explaining how to add new properties and, of course, how to remove them, although that's a less likely scenario.

    During my presentation at Microsoft DevDays “Make Web not War”, I’ve created an example using a Weather service and an entity called WeatherEntry, so let’s use it. My class looks like this:

       [DataServiceKey("PartitionKey", "RowKey")]
       public class WeatherEntry : TableServiceEntity
       {
           public WeatherEntry()
           {
               PartitionKey = "athgr";
               RowKey = string.Format("{0:d19}_{1}", DateTime.MaxValue.Ticks - DateTime.Now.Ticks, Guid.NewGuid());
           }
           public DateTime TimeOfCapture { get; set; }
           public string Temperature { get; set; }
       }

    There is nothing special about this class. I use two custom properties, TimeOfCapture and Temperature, and I'm going to make a small change: I'll add "SchemaVersion", which is needed to achieve the functionality I want. When I want to create a new entry, all I do now is instantiate a WeatherEntry, set the values and use a helper method called AddEntry to persist my changes.

       public void AddEntry(string temperature, DateTime timeofc)
       {
           this.AddObject("WeatherData", new WeatherEntry { TimeOfCapture = timeofc, Temperature = temperature, SchemaVersion = "1.0" });
           this.SaveChanges();
       }

    I'm using TableServiceContext from the newly released StorageClient, and methods like UpdateObject, DeleteObject, AddObject etc. exist in my data service context, which is where the AddEntry helper method lives. At the moment my Table schema looks like this:


    It’s pretty obvious there is no special handling during saving of my entities but this is about to change in my hybrid version.

    The hybrid

    I made some changes to my class and added a new property. It holds the temperature sample area; in my case Spata, where Athens International Airport is located.

    My class looks like this now:

       [DataServiceKey("PartitionKey", "RowKey")]
       public class WeatherEntry : TableServiceEntity
       {
           public WeatherEntry()
           {
               PartitionKey = "athgr";
               RowKey = string.Format("{0:d19}_{1}", DateTime.MaxValue.Ticks - DateTime.Now.Ticks, Guid.NewGuid());
           }
           public DateTime TimeOfCapture { get; set; }
           public string Temperature { get; set; }
           public string SampleArea { get; set; }
           public string SchemaVersion { get; set; }
       }

    So, this hybrid client has to handle both entities from version 1 and entities from version 2, because my schema is already on version 2. How do you do that? The main idea is that you retrieve an entity from Table storage and check whether SampleArea and SchemaVersion have a value. If they don't, you put in a default value and save them. In my case the schema version number has to be 1.5, as this is the default schema number for this hybrid solution. One key point in this procedure: before you upgrade your client to this hybrid, you roll out an update enabling the "IgnoreMissingProperties" flag on your TableServiceContext. If IgnoreMissingProperties is true, when a version 1 client tries to access entities which are on version 2 and have those new properties, it WON'T raise an exception; it will just ignore them.

       var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
       var context = new WeatherServiceContext(account.TableEndpoint.ToString(), account.Credentials);

       /* Ignore missing properties on my entities */
       context.IgnoreMissingProperties = true;

    Remember, you have to roll out this update BEFORE you upgrade to the hybrid.

    Whenever I'm updating an entity in Table Storage, I check its schema version, and if it's not "1.5" I update it and put a default value on SampleArea:

       public void UpdateEntry(WeatherEntry wEntry)
       {
           if (wEntry.SchemaVersion.Equals("1.0"))
           {
               /* If schema version is 1.0, update it to 1.5
                * and set a default value on SampleArea */
               wEntry.SchemaVersion = "1.5";
               wEntry.SampleArea = "Spata";
           }
           /* Put some try/catch here to
            * catch concurrency exceptions */
           this.UpdateObject(wEntry);
           this.SaveChanges();
       }

    My schema now looks like this. Notice that both versions of my entities co-exist and are handled just fine by my application.


    Upgrading to version 2.0

    Upgrading to version 2.0 is now easy. All you have to do is change the default schema number to 2.0 when you create a new entity, and of course update your "UpdateEntry" helper method to check if the version is 1.5 and update the value to 2.0.

       this.AddObject("WeatherData", new WeatherEntry { TimeOfCapture = timeofc, Temperature = temperature, SchemaVersion = "2.0" });


       public void UpdateEntry(WeatherEntry wEntry)
       {
           if (wEntry.SchemaVersion.Equals("1.5"))
           {
               /* If schema is version 1.5 it already has a default
                * value; all we have to do is update the schema version so
                * our system won't ignore the default value */
               wEntry.SchemaVersion = "2.0";
           }
           /* Put some try/catch here to
            * catch concurrency exceptions */
           this.UpdateObject(wEntry);
           this.SaveChanges();
       }

    Whenever you retrieve a value from Table Storage, you have to check if it's on version 2.0. If it is, you can safely use its SampleArea value, which is not the default any more; the schema version only changes when you actually call "UpdateEntry", which means you had the chance to change SampleArea to a non-default value. But if it's on version 1.5, you have to ignore it or update it to a new, correct value.
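    The read-side check described above could be sketched like this; UseSampleArea is a hypothetical consumer method, not something from the post:

    ```csharp
    /* Trust SampleArea only once the entity has reached schema version 2.0. */
    if (entry.SchemaVersion.Equals("2.0"))
    {
        /* Safe: SampleArea was explicitly set by the application. */
        UseSampleArea(entry.SampleArea);   // hypothetical consumer method
    }
    /* Otherwise (version 1.5) SampleArea still carries the default value
       ("Spata"), so ignore it or update it to a correct value first. */
    ```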

    If you do want to use the default value anyway, you can create a temporary worker role which will scan the whole table and update all of your schema version numbers to 2.0.

    How about when you remove properties

    That's a really easy modification. If you remove a property, you can use a SaveChangesOptions value called ReplaceOnUpdate during SaveChanges(), which will overwrite your entity with the new schema. Don't forget to update your schema version number to something unique and put some checks into your application to avoid failures when trying to read non-existent properties due to a newer schema version.

       this.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);

    That's all for today! :)



    In a previous post I described what a VIP Swap is and how you can use it as an updating method to avoid service disruption. This particular method doesn't apply to all possible scenarios; most of the time, if not always, during protocol updates or schema changes you'll need to upgrade your service while it's still running, chunk by chunk and without any downtime or disruption. By In-Place, I mean upgrades during which both versions (old and new) are running side by side. To better understand the process below, you should read my "Upgrade domains" post, in which there is a detailed description of what upgrade domains are, how they affect your application, how you can configure the number of domains etc.

    To avoid service disruption and outages, Windows Azure upgrades your application domain by domain (upgrade domain, that is). That will result in a particular state where your Upgrade Domain 0 (UD0) is running a newer version of your client/service/what-have-you while UD1, UD2 etc. run an older version. The best approach is a two-phase upgrade.

    Let's call our old protocol version V1 and our new version V2. At this point, you should consider introducing a new client version, call it 1.5, which is a hybrid. This version understands the protocols used in both versions, but always uses protocol V1 by default and only responds with protocol V2 if the request is on V2. You can now start pushing your upgrades either through the Service Management API or using the Windows Azure Developer Portal to completely automate the procedure. By the end of this process, you'll have achieved a seamless upgrade of your service without any disruption, and all of your clients will have upgraded to this hybrid. As soon as your first phase is done and all of your domains are running version 1.5, you can proceed to phase two.
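    To illustrate the hybrid's behavior, here is a hand-wavy sketch; the Request/Response types, the ProtocolVersion property and the handler methods are all hypothetical, invented purely for illustration:

    ```csharp
    /* Version 1.5 hybrid: understands both protocols, defaults to V1 and
       answers in V2 only when the incoming request is already on V2. */
    public Response Handle(Request request)
    {
        if (request.ProtocolVersion == "V2")
            return HandleV2(request);   // caller speaks V2, answer in V2

        return HandleV1(request);       // default: the old protocol
    }
    ```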

    In the second phase you'll repeat the same process, but this time your version 2 clients will use protocol V2 by default. Remember, your 1.5 clients DO understand protocol V2 and respond to it properly when called upon. To make it simple, this time you're deploying version 2 of your client, which uses version 2 of your protocol only; old legacy code for version 1 is removed completely. As your upgrade domains complete the second phase, all of your roles will be using version 2 of your protocol, again without any service disruption or downtime.

    Schema changes have a similar approach but I’ll make a different post and actually put some code on it to demonstrate that behavior.


    Windows Azure automatically divides your role instances into "logical" domains called upgrade domains. During an upgrade, Azure updates these domains one by one. This is by-design behavior to avoid nasty situations. One of the latest feature additions and enhancements to the platform was the ability to notify your role instances of "environment" changes, with instances being added or removed as the most common cases. In such a case, all of your roles get a notification of the change. Imagine if you had 50 or 60 role instances, all getting notified at once and starting various actions to react to the change. It would be a complete disaster for your service.

    Source: MSDN

    The way to address this problem is upgrade domains. As I said, during an upgrade Windows Azure updates them one by one, and only the role instances associated with a specific domain get notified of the changes taking place. Only a small number of your role instances get notified and react; the rest remain intact, providing a seamless upgrade experience with no service disruption or downtime.

    Source: MSDN

    There is no control over how Windows Azure divides your instances and roles into upgrade domains. It's a completely automated procedure done in the background. There are two ways to perform an upgrade on a domain: using the Service Management API or the Windows Azure Developer Portal. On the Developer Portal there are two more options, automatic and manual. If you select automatic, Windows Azure upgrades your domains without you having to worry about what is going on. If you select manual, you'll have to upgrade each of your domains one by one.

    This is some of the magic the Windows Azure operating system and platform use to deliver scalability, availability and high reliability for your service.



    Today, during my presentation at Microsoft DevDays "Make Web not War", I got a pretty nice question about concurrency, and I left the answer somewhat blurry and without a straight conclusion. Sorry, but we were changing subjects so fast that I missed it, and I only realized it on my way back.

    The answer is yes, there is concurrency. If you examine a record in your table storage you'll see that there is a Timestamp field, the so-called "ETag". Windows Azure uses this field to apply optimistic concurrency to your data. When you retrieve a record from the table, change a value and then call "UpdateObject", Windows Azure checks whether the timestamp field has the same value on your object as it does in the table; if it does, the update goes through just fine. If it doesn't, it means someone else changed the record, and you'll get an exception which you have to handle. One possible solution is to retrieve the object again, apply your values and push it back. The final approach to concurrency is absolutely up to the developer and varies between different types of applications.
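    The retrieve-again-and-retry approach could be sketched like this with the StorageClient-era TableServiceContext, reusing the WeatherEntry type from my schema-versioning post; the query shape and the decision to reapply only Temperature are illustrative assumptions, not the one right answer:

    ```csharp
    try
    {
        this.UpdateObject(entry);
        this.SaveChanges();
    }
    catch (DataServiceRequestException)
    {
        /* ETag mismatch: someone updated the entity after we read it.
           (In production, inspect the response for HTTP 412 Precondition
           Failed before retrying.) Re-fetch the latest version, reapply
           our change and save again. */
        var fresh = (from e in this.CreateQuery<WeatherEntry>("WeatherData")
                     where e.PartitionKey == entry.PartitionKey
                           && e.RowKey == entry.RowKey
                     select e).First();
        fresh.Temperature = entry.Temperature;
        this.UpdateObject(fresh);
        this.SaveChanges();
    }
    ```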

    As I mentioned during my presentation, there are a lot of different approaches to handling concurrency in Windows Azure Table Storage. There is a very nice video in the "How to" section on MSDN about Windows Azure Table Storage concurrency which can certainly give you some ideas.

