dotNETZone.gr: Posts tagged 'Cloud'
A nice change I noticed today is that VM Depot images are now visible in the Windows Azure Portal. If you go to Virtual Machines, you’ll see an option that says “Browse VMDepot”. If you click it, you get the list of the images already in the VM Depot. You can select one and create a virtual [...]
VM Depot allows users to build, deploy and share their favorite Linux configuration, create custom open source stacks, work with others and build new architectures for the cloud that leverage the openness and flexibility of the Azure platform. How it works: all images shared via this catalog are open for interaction, where other users of [...]
What is Windows AzureConf? On November 14, 2012, Microsoft will be hosting Windows AzureConf, a free event for the Windows Azure community. This event will feature a keynote presentation by Scott Guthrie, along with numerous sessions delivered by Windows Azure community members. Streamed live for an online audience on Channel 9, the event will allow [...]
It’s a short post but worth it! It’s live: Windows Azure Web Sites are now running with support for .NET Framework 4.5. As this is an in-place upgrade from 4.0 to 4.5, you shouldn’t expect anything to break or stop working, but make sure you test your stuff before you upload. Everything targeting 4.0 should still work as [...]
I accessed the portal today and a nice surprise was waiting for me. Among other improvements that I either haven’t spotted yet or that are not visible to us (backend changes), there were two very welcome changes: 1) Service Bus can now be managed from the new portal! Under the “App Services” category, you can now [...]
Windows Azure Web Sites (WAWS) is a powerful hosting platform provided by Microsoft on Windows Azure. One of the coolest features is that you can run your Web Sites/Web Apps pretty easily, and it’s not limited to ASP.NET; it also supports PHP. Also, at the database level it’s not only SQL databases (MSSQL) that are [...]
I was testing Windows Azure Web Sites (codename Antares, or WAWS for short) for about 3 weeks before it was announced two days ago at MeetWindowsAzure. Since the beginning I wanted … Continue reading →
Windows Azure Trust Center was released to the public and it contains valuable information about security and privacy on the platform. You can find details about what kind of certificates the platform holds and generally why and how the platform is secure, handles data and respects privacy. All that information applies to Windows Azure core services … Continue reading »
There is a common error that sometimes occurs when you try to start and debug a web site/web app on the Windows Azure Emulator; it is caused by a couple of reasons that I will explain below. The error message is “There was an error attaching the debugger to the IIS worker process for URL <THE_URL> …”. Reason … Continue reading »
Continuing from where I left off in my previous post, I’m going to explain how the Announcement service works and why we chose that approach. The way JBoss and mod_proxy work now is that every time something changes in the topology (a new proxy or a JBoss node is added or removed), the proxy [...]
I’m going to start a series of posts to explain how we made JBoss run on Windows Azure, not just in standalone mode but with full cluster support. Let me start with one simple disclaimer: I’m NOT a Java guy, but I work with some very talented people under the same roof and under the [...]
Today Windows Azure SDK 1.5 and Windows Azure AppFabric SDK 1.5 were released, fixing issues and bugs detected during the beta. There are also some new enhancements: a re-architected emulator, which enables higher fidelity between local and cloud development; support for uploading service certificates in csupload.exe; a new csencrypt.exe tool to manage remote desktop [...]

I posted an article at CodeProject explaining how you can use Windows Azure AppFabric Cache (CTP) in your applications.

You can find the article here -> http://www.codeproject.com/KB/azure/WA-AppFabric-cache.aspx

Please let me know whether you liked it or not, and of course any comments are more than welcome!

Thank you,

PK.


A lot of interesting things have been going on lately on the Windows Azure MVP list, and I'll try to pick the best ones that I can share and turn them into posts.

During an Azure bootcamp, a fellow Windows Azure MVP had a very interesting question: "What happens if someone is updating a BLOB and a request comes in for that BLOB to serve it?"

The answer came from Steve Marx pretty quickly and I'm just quoting his email:

"The bottom line is that a client should never receive corrupt data due to changing content.  This is true both from blob storage directly and from the CDN.
 
The way this works is:
·         Changes to block blobs (put blob, put block list) are atomic, in that there’s never a blob that has only partial new content.
·         Reading a blob all at once is atomic, in that we don’t respond with data that’s a mix of new and old content.
·         When reading a blob with range requests, each request is atomic, but you could always end up with corrupt data if you request different ranges at different times and stitch them together.  Using ETags (or If-Unmodified-Since) should protect you from this.  (Requests after the content changed would fail with “condition not met,” and you’d know to start over.)
 
Only the last point is particularly relevant for the CDN, and it reads from blob storage and sends to clients in ways that obey the same HTTP semantics (so ETags and If-Unmodified-Since work).
 
For a client to end up with corrupt data, it would have to be behaving badly… i.e., requesting data in chunks but not using HTTP headers to guarantee it’s still reading the same blob.  I think this would be a rare situation.  (Browsers, media players, etc. should all do this properly.)
 
Of course, updates to a blob don’t mean the content is immediately changed in the CDN, so it’s certainly possible to get old data due to caching.  It should just never be corrupt data due to mixing old and new content."

So, as you can see from Steve's reply, there is no chance of getting corrupt data (unlike with some other vendors), only stale data.
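To make the last point of the quote a bit more concrete, here is a minimal sketch at the raw HTTP level (the blob URL is hypothetical and I'm deliberately using plain HttpWebRequest instead of the StorageClient library): it reads a blob in two ranges and sends the ETag of the first response as If-Match on the second, so if the blob changes in between you get a 412 Precondition Failed instead of stitched-together content.

    using System;
    using System.IO;
    using System.Net;

    public static class RangeReadExample
    {
        public static void ReadInTwoRanges()
        {
            // Hypothetical public blob URL; substitute your own.
            var blobUri = new Uri("http://myaccount.blob.core.windows.net/mycontainer/myblob.bin");

            // First range request: bytes 0-511.
            var first = (HttpWebRequest)WebRequest.Create(blobUri);
            first.AddRange(0, 511);

            string etag;
            byte[] firstChunk;
            using (var response = (HttpWebResponse)first.GetResponse())
            {
                etag = response.Headers[HttpResponseHeader.ETag];
                firstChunk = ReadFully(response.GetResponseStream());
            }

            // Second range request: bytes 512-1023, conditional on the same ETag.
            var second = (HttpWebRequest)WebRequest.Create(blobUri);
            second.AddRange(512, 1023);
            second.Headers[HttpRequestHeader.IfMatch] = etag;

            try
            {
                using (var response = (HttpWebResponse)second.GetResponse())
                {
                    byte[] secondChunk = ReadFully(response.GetResponseStream());
                    // Both chunks belong to the same version, safe to stitch together.
                    Console.WriteLine("Read {0} bytes in total.", firstChunk.Length + secondChunk.Length);
                }
            }
            catch (WebException ex)
            {
                var failed = ex.Response as HttpWebResponse;
                if (failed != null && failed.StatusCode == HttpStatusCode.PreconditionFailed)
                {
                    // The blob changed between the two requests: discard and start over.
                }
                else
                {
                    throw;
                }
            }
        }

        private static byte[] ReadFully(Stream stream)
        {
            using (var memory = new MemoryStream())
            {
                stream.CopyTo(memory);
                return memory.ToArray();
            }
        }
    }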

PK.


Amazon announced the AWS Free Usage Tier (http://aws.amazon.com/free/) last week, which starts on November 1st. I know some people are excited about this announcement and so am I, because I believe that competition between cloud providers always brings better service for the customer, but in Amazon's case it's more of a marketing trick than a real benefit, and I'll explain why in this post. Let me remind you at this point that this is strictly a personal opinion. Let me also say that I have experience with AWS too.

Certainly, having something free to start with is always nice, but what exactly is free and how does it compare to the Windows Azure platform? First of all, Windows Azure has a similar free startup offer, called the Introductory Special, which gives you free compute hours, storage space and transactions, a SQL Azure Web instance, AppFabric connections and Access Control transactions, and free traffic (inbound and outbound), all up to some limit of course. Then there is the BizSpark program, which also gives you a very generous package of Windows Azure platform benefits to start developing on, and of course let's not forget the MSDN subscription Windows Azure offer, which is even more buffed up than the others.

OK, I promised the Amazon part, so here it is. The AWS billing model is different from Windows Azure's: it's very detailed, a lot of things are broken into smaller pieces, and each one of them is billed in a different way. Some facts:

  • Load balancing for EC2 instances is not free. Not only do you pay compute hours, but you're also charged for the traffic (GB) that goes through your balancer. Windows Azure load balancing is just there and it just works, and of course you don't pay compute hours and traffic just for that.
  • On EBS you're charged for every read and write you do (I/O), charged for the amount of space you use, snapshot size counts on its own (not within your total), and you're also charged per snapshot operation (Get or Put). On Windows Azure Storage you have two things: transactions and the amount of space you consume. Also, for snapshots, only your delta (the differences) counts against your total, not the whole snapshot.
  • SimpleDB is charged per machine hour* consumed and per GB of storage. With Windows Azure Tables you only pay for your storage and transactions. You might say that I should compare this to S3, but I don't agree; S3 is not as close to Windows Azure Tables as SimpleDB is. What is even more disturbing about S3 is the fact that there is a durability guarantee of 99.99%, which actually means you can lose (!!) 0.01% of your data.
  • There is no RDS instance (based on MySQL) included in the free tier. With the Introductory Special you get a SQL Azure Web database (1GB) for 3 months, or for as long as you have a valid subscription when you're using the MSDN Windows Azure offer, where you actually get 3 databases.

For me, the biggest difference is the development experience. Windows Azure offers a precise local emulation of the cloud environment on your development machine, called DevFabric, which ships with the Windows Azure Tools for VS2008/VS2010. All you have to do is press F5 on your cloud project and you get the local emulation on your machine to test, debug and prepare for deployment. Amazon doesn't offer this kind of development environment. There is integration with Eclipse and other IDEs, but every time you hit the Debug button you're actually hitting real web services with your credentials, consuming from your free tier, and as soon as you're done consuming that, you start paying to develop and debug. The free tier is more like a "development tier" to me. Windows Azure offers you both: the development experience you expect, without any cost, on your local machine with DevFabric, and a development experience in the real cloud environment where you can deploy and test your application, also without any cost unless of course you consume your free allowance.

Some may say you can't compare AWS to Windows Azure because they are not the same: AWS is mostly IaaS (Infrastructure as a Service) and Windows Azure is PaaS (Platform as a Service), and I couldn't agree more. But what I'm comparing here are features that already exist on both services. I'm not comparing EC2 instance sizes to Windows Azure instance sizes; I'm comparing load balancing, SimpleDB, etc.

* A machine hour is a different concept from a compute hour, and it's beyond the scope of this post.

Thank you for reading and please feel free to comment.


PK.



Recently, I’ve been looking for a way to persist the state of an idling workflow in WF4. There is a way to use SQL Azure to achieve this, after modifying the scripts because they contain unsupported T-SQL commands, but it’s total overkill to use it just to persist WF information if you’re not using the RDBMS for anything else.

I decided to modify the FilePersistence.cs from the Custom Persistence Service sample in the WF4 Samples Library and make it work with Windows Azure Blob storage. I’ve created two new methods to serialize and deserialize information to/from Blob storage.

Here is some code:

    // Requires: using System; using System.IO;
    // using Microsoft.WindowsAzure; using Microsoft.WindowsAzure.StorageClient;

    private void SerializeToAzureStorage(byte[] workflowBytes, Guid id)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var container = account.CreateCloudBlobClient().GetContainerReference("workflow-persistence");

        var blob = container.GetBlobReference(id.ToString());
        blob.Properties.ContentType = "application/octet-stream";

        // Wrap the serialized workflow bytes in a stream and upload them.
        using (var stream = new MemoryStream(workflowBytes))
        {
            blob.UploadFromStream(stream);
        }
    }

    private byte[] DeserializeFromAzureStorage(Guid id)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var container = account.CreateCloudBlobClient().GetContainerReference("workflow-persistence");

        var blob = container.GetBlobReference(id.ToString());
        return blob.DownloadByteArray();
    }

Just make sure you’ve created the “workflow-persistence” blob container before using these methods (container names may only contain lowercase letters, numbers and dashes).
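If it helps, here is a minimal sketch of doing exactly that at role start-up. It assumes the same "DataConnectionString" setting as above and that the code runs inside a web or worker role; it simply publishes the configuration setting and creates the container if it isn't there yet.

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class WorkflowPersistenceSetup
    {
        // Call this once, e.g. from the role's OnStart, before the persistence
        // methods above are used for the first time.
        public static void EnsureContainerExists()
        {
            // Needed so that FromConfigurationSetting can resolve settings
            // from the role configuration (same setting name as above).
            CloudStorageAccount.SetConfigurationSettingPublisher(
                (configName, publishConfigurationValue) =>
                    publishConfigurationValue(
                        Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment
                            .GetConfigurationSettingValue(configName)));

            var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("workflow-persistence");

            // No-op if the container is already there.
            container.CreateIfNotExist();
        }
    }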

PK.



Once again, Microsoft proved that it values its customers, whether big enterprises or small startups. We’re a small, privately-held business and I personally have a major role in it, as I’m one of the founders. Recently, I’ve been giving some pretty nice presentations and a bunch of sessions for Microsoft Hellas about Windows Azure and cloud computing in general.

I was using the CTP account(s) I’ve had since PDC 08 and I had a lot of services running there from time to time, all for demo purposes. But with the 2nd commercial launch wave Greece was included, and I had to upgrade my subscription and start paying for it. I was OK with that, because the MSDN Premium subscription has 750 hours/month, SQL Azure databases and other stuff included for free. I went through the upgrade process from CTP to paid, everything went smoothly, and there I was, waiting for my CTP account to switch to read-only mode and eventually “fade away”. During that process, I made a small mistake: I miscalculated my running instances. I actually missed some. That turned out to be a mistake that would cost me some serious money for show-case/marketing/demo projects running on Windows Azure.

About two weeks ago, I had an epiphany during the day: “Oh, crap... did I turn that project off? How many instances do I have running?”. I logged on to the billing portal and, sadly for me, I had been charged something like 4500 hours because of the forgotten instances and my miscalculation. You see, I had done a demo about switching between instance sizes and I had some instances running as big VMs. That’s four (4) times the price per hour.

It was clearly my mistake and I had to pay for it (literally!). But then I tweeted my bad luck to help others avoid the same mistake, the very thing I had been warning my clients about all this time, and some people from Microsoft got interested in my situation. I explained what happened and we ended up with a pretty good deal just 3 days after I tweeted. But that was an exception, so certainly DON’T count on it.

Bottom line: be careful and plan correctly. Mistakes do happen, but the more careful we are, the rarer they will be.

* I want to publicly say thank you to everyone who was involved in this and helped me sort things out so quickly.

PK.



Paspartu is French for “one size fits all”. Recently I’ve been coming across posts explaining and “promoting” the idea of spawning threads inside a worker role, each of them with a unique piece of work to be done. They all share the same idea and all of them describe the same thing.

The idea

You have some work to do, but you want to do it in the most efficient way, without having underutilized resources, which is one of the benefits of cloud computing anyway.

The implementation

You have a worker process (a Worker Role on Windows Azure) which processes some data. Certainly that’s a good implementation, but it’s not a best practice. Most of the time your instance will be underutilized, unless you’re doing some CPU- and memory-intensive work and you have a continuous flow of data to be processed.

In another implementation, we created a master-slave pattern. A master distributes work to slave worker roles; the slaves pick up their work, do their stuff, return the result and start over again. Still, in some cases that’s not the best idea either, with the same cons as before: underutilized resources and a high risk of failure. If the master dies and the system isn’t properly designed, your system dies: you can’t process any data.

So, another one appeared: inside a worker role, spawn multiple threads, each running its own process or method, doing its work and returning a result. Underutilization is minimized, the thread pool is doing all the hard work for us, and as soon as .NET 4.0 is supported on Windows Azure, parallelization is easy and, allow me to say, mandatory. But what happens if the worker instance dies? Or restarts? Yes, your guess is correct: you lose all the threads, and all the processing done up to that moment is lost, unless you persist it somehow. If you had multiple instances of your worker role to imitate that behavior, that wouldn’t happen; you’d only lose data from the instance that died.
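To illustrate that trade-off, here is a rough sketch of the multi-threaded worker role approach (the thread count and ProcessNextMessage are placeholders of mine, not anything from the posts being discussed); the important part is that the threads never hold the only copy of the work in memory, they pull it from a durable source such as a queue and delete it only when done.

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class MultiThreadedWorkerRole : RoleEntryPoint
    {
        private const int WorkerThreadCount = 4; // arbitrary degree of parallelism

        public override void Run()
        {
            var threads = new Thread[WorkerThreadCount];

            for (int i = 0; i < WorkerThreadCount; i++)
            {
                threads[i] = new Thread(ProcessLoop) { IsBackground = true };
                threads[i].Start();
            }

            // Keep Run() from returning; if it returns, the instance is recycled.
            foreach (Thread thread in threads)
            {
                thread.Join();
            }
        }

        private void ProcessLoop()
        {
            while (true)
            {
                // Pull the next work item from a durable source (e.g. an Azure queue),
                // process it and only then delete it. If the instance dies half-way,
                // only the in-memory progress is lost; the message becomes visible
                // again and another thread or instance picks it up.
                ProcessNextMessage(); // placeholder for your own processing logic

                Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }

        private void ProcessNextMessage()
        {
            // ... your processing goes here ...
        }
    }

If the instance recycles, whatever was only in memory is gone, but the unfinished messages reappear on the queue and get picked up again, which is exactly the persistence point made above.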

As Eugenio Pace says, “You have to be prepared to fail”, and he’s right. At any moment your instance can die without a single notice, and you have to be prepared to deal with it.

Oh, boy.

So really, there is no single solution or best practice; for me, it’s best guidance. Depending on your scenario, one of the solutions above, or even a new one, can fit you better than it fits others. Every project is unique and has to be treated as such. Try to think outside the box and remember that this is deep water for everyone. It’s just that some of us swim better.

PK.


 

Every single time something new emerges in the IT market, there are three distinct categories of people: early adopters, the late majority and skeptics. Each of them has its own reasons and its own concerns about when, why and whether they are going to adopt this new technology, either completely or in a hybrid mode.

All of them have some things in common. They share the same concerns about security, billing, taxes, availability, latency and probably some others.

Concerns and my thoughts.

Billing is something that can easily be solved and explained by clever and good marketing. Unfortunately, there is no such thing as local billing. Cloud computing services are truly global, and the same billing model, the same prices and the same billing methods have to be used to provide a fair and consistent service to every customer. This has to change. For some markets a service can be really cheap, but for some others it can be really expensive. Increasing the price in some countries and decreasing it in others can make the service fairer and easier to adopt. Using a credit card to identify the country is a good method, but there is a problem. It’s called taxes.

Taxes are a way for a government to make money. In many countries, Greece being one of them, having a credit card with a decent limit is a privilege, and unfortunately I mean that in a bad way. Interest is quite high, and with such an unstable tax policy you can never be sure there won’t be any extra fees you might have to pay sooner or later. But I guess this is not only our problem but one for some other countries too, mostly emerging markets. Providing another way of paying the monthly fees for service usage can easily overcome this.

Security. Oh yes, security. Countless questions during presentations and chats are about security. Tons of “what ifs”. Yes, it’s a big issue, but too much skepticism is never good. I believe people are not really worried about security issues like data being leaked or stolen; they are worried because they somehow lose control of their information, or at least that’s what they believe. The idea that their data are not stored on their own hardware but somewhere else, and not even in the same country, terrifies them. I’m not sure there is anything that can be done to subdue this concern, but at least there can be some localized data centers, for example for banks where regulatory laws demand data to be stored in the same country, if not on-premises and owned by the bank. A private cloud could probably meet those regulations.

Latency. That’s an easy one; its principle is the same as security’s: my data are over there, and there might be significant latency until I get a response. Yes, there is a delay; no, it’s not that big, probably somewhere between 60 and 100 ms. For applications that are not real-time, this is really, really low. You can even play shoot-’em-up games with 100 ms latency. The only thing we can do is require a decent DSL line from our customers in case our locally installed application is accessing a cloud service. Also, picking the right region to deploy our application can have a significant impact on latency.

Availability. People are worried about their data not being available when they are most needed. The further away their data are, the more points of failure: their internet line, their ISP, a ship cutting some cables 4000 km away. Most, if not all, cloud service providers promise 3 or 4 “nines” of uptime and availability, but there are plenty of examples of services failing because of unanticipated code paths or human errors (e.g. Google). Other companies have proved more trustworthy and more reliable.

Conclusion

Concluding this post, I want to make something clear: I’m not part of any of those distinct groups of people. I started playing with cloud computing services right after Amazon removed the beta label from AWS, back in 2008 (April, if I recall correctly), with Windows Azure following at PDC ’08. I got my first token back then and started playing with it. I’ve seen Windows Azure shape and change within those two years into something amazing and really ground-breaking. Windows Azure can successfully lower or even eliminate your concerns on some of the matters discussed above, but there is room for improvement and always will be. I’m going to dig a little deeper into those matters and try to provide more concrete answers and thoughts.

Thank you for reading so far,

PK.


 

Yes, it is.

Table storage keeps multiple replicas and guarantees uptime and availability, but for business continuity reasons you also have to be protected from possible failures in your own application. Business logic errors can harm the integrity of your data, and all the Windows Azure Storage replicas will be harmed too. You have to be protected from those scenarios, and having a backup plan is necessary.

There is a really nice project available on Codeplex for this purpose: http://tablestoragebackup.codeplex.com/
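If you want to roll something minimal yourself, a rough sketch (using the 1.x StorageClient library, with a hypothetical CustomerEntity class and "Customers" table) could simply dump the entities of a table into an XML blob; the CodePlex project above is of course far more complete, covering restore, scheduling and so on.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Xml.Serialization;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // Hypothetical entity; substitute your own table entities.
    public class CustomerEntity : TableServiceEntity
    {
        public string Name { get; set; }
        public string Email { get; set; }
    }

    public static class TableBackup
    {
        // Dumps every entity of the "Customers" table into a single XML blob.
        public static void BackupCustomers(CloudStorageAccount account)
        {
            var context = account.CreateCloudTableClient().GetDataServiceContext();

            // AsTableServiceQuery() handles continuation tokens, so tables with
            // more than 1000 entities are read completely.
            List<CustomerEntity> entities = context
                .CreateQuery<CustomerEntity>("Customers")
                .AsTableServiceQuery()
                .Execute()
                .ToList();

            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("table-backups");
            container.CreateIfNotExist();

            using (var stream = new MemoryStream())
            {
                new XmlSerializer(typeof(List<CustomerEntity>)).Serialize(stream, entities);
                stream.Position = 0;

                var blobName = "customers-" + DateTime.UtcNow.ToString("yyyyMMdd-HHmmss") + ".xml";
                container.GetBlobReference(blobName).UploadFromStream(stream);
            }
        }
    }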

PK.



Just three days ago, on Feb 13th, SQL Azure got an update. Long-requested features like upgrading and downgrading between the Web and Business editions are finally implemented; it’s easy, just a single command to switch between editions. Also, some DMVs were introduced to match on-premises SQL Server, and the idle session timeout was increased.

In detail*:

Troubleshooting and Supportability DMVs

Dynamic Management Views (DMVs) return state information that can be used to monitor the health of a database, diagnose problems, and tune performance. These views are similar to the ones that already exist in the on-premises edition of SQL Server.

The DMVs we have added are as follows:

  • sys.dm_exec_connections – This view returns information about the connections established to your database.
  • sys.dm_exec_requests – This view returns information about each request that executes within your database.
  • sys.dm_exec_sessions – This view shows information about all active user connections and internal tasks.
  • sys.dm_tran_database_transactions – This view returns information about transactions at the database level.
  • sys.dm_tran_active_transactions – This view returns information about transactions for your current logical database.
  • sys.dm_db_partition_stats – This view returns page and row-count information for every partition in the current database.

Ability to move between editions

One of the most requested features was the ability to move up and down between the Web and Business edition databases. This provides you greater flexibility: if you approach the upper limits of the Web edition database, you can easily upgrade with a single command. You can also downgrade if your database is below the allowed size limit.

You can now do that using the following syntax:

    ALTER DATABASE database_name
    {
        MODIFY (MAXSIZE = {1 | 10} GB)
    }
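If you prefer to issue the change from code rather than from a query window, a minimal ADO.NET sketch could look like the following (the server name, credentials and the assumption that the statement is run against the master database are mine, not part of the announcement); the statement itself is exactly the syntax above.

    using System.Data.SqlClient;

    public static class SqlAzureEditionSwitch
    {
        // Grows (or shrinks) a SQL Azure database between the 1 GB and 10 GB caps.
        public static void SetMaxSize(string databaseName, int maxSizeGb)
        {
            // Hypothetical server and credentials; connecting to master here.
            const string connectionString =
                "Server=tcp:myserver.database.windows.net;Database=master;" +
                "User ID=myadmin@myserver;Password=...;Encrypt=True;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = connection.CreateCommand())
            {
                // Database names can't be parameterized, so keep the value under your control.
                command.CommandText = string.Format(
                    "ALTER DATABASE [{0}] MODIFY (MAXSIZE = {1} GB)", databaseName, maxSizeGb);

                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }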

Idle session timeouts

We have increased the idle connection timeout from 5 to 30 minutes. This will improve your experience while using connection pooling and other interactive tools.

Long running transactions

Based on customer feedback, we have improved our algorithm for terminating long running transactions. These changes will substantially increase the quality of service and allow you to import and export much larger amounts of data without having to resort to breaking your data down into chunks.

* Source: MSDN Forums announcement

Provide your feedback at http://www.mygreatsqlazureidea.com. There are some great features requested, like the ability to automatically store a backup in BLOB storage.

PK.



Recently at the MSDN Forums there were people asking how they can detect whether their web application is running on the cloud or locally (Dev Storage). The obvious part is that code inside a Web Role or Worker Role Start() method only exists in a cloud template, but what if you want to make that check somewhere else, for example inside a Page_Load method or inside a library (dll)?

If you’re trying to detect it at the “UI” level, let’s say in Page_Load, you can simply check your headers: Request.Headers["Host"] will do the trick. If it’s “localhost”, or whatever you expect it to be, that can be used to determine whether it’s running locally.

But how about a Library? Are there any alternatives?

Well, it’s not the most bulletproof method, but it has served me well until now and I don’t think it’s going to stop working, as it relies on a fundamental architecture element of Windows Azure. There are specific Environment properties that raise a SecurityException because you’re not allowed to read them; one of them is MachineName. So, if Environment.MachineName raises an exception, you’re probably running on the cloud. As I said, it’s not bulletproof, because if an IT administrator applies a CAS policy that restricts specific properties it can still raise an exception, but you get my point. A combination of tricks can give you the desired result.
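Here is roughly what that trick looks like wrapped in a helper (the class and method names are mine); as said above, treat it as a heuristic and combine it with other checks.

    using System;
    using System.Security;

    public static class EnvironmentDetector
    {
        // Heuristic described above: under the role's restricted trust policy,
        // reading Environment.MachineName throws a SecurityException, while on a
        // normal dev box it does not.
        public static bool IsProbablyRunningOnCloud()
        {
            try
            {
                // Reading this property is what may throw.
                GC.KeepAlive(Environment.MachineName);
                return false;
            }
            catch (SecurityException)
            {
                return true;
            }
        }
    }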

PK.



You’re trying to create a queue on Windows Azure and you’re getting a “400 Bad Request” as an inner exception. Well, there are two possible scenarios:

1) The name of the queue is not valid. It has to be a valid DNS Name to be accepted by the service.

2) The service is down or something went wrong and you just have to retry, so implementing retry logic in your service initialization is not a bad idea; I might even say it’s mandatory (there is a minimal sketch further below).

The naming rules

  • A queue name must start with a letter or number, and may contain only letters, numbers, and the dash (-) character.
  • The first and last letters in the queue name must be alphanumeric. The dash (-) character may not be the first or last letter.
  • All letters in a queue name must be lowercase.
  • A queue name must be from 3 through 63 characters long.

More on that here.
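For the second scenario, here is a minimal sketch (the queue name, retry count and delays are arbitrary choices of mine) of creating a queue with a rule-compliant name and retrying a couple of times before giving up.

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class QueueBootstrap
    {
        // A DNS-friendly queue name (all lowercase, dashes only) plus a naive
        // retry loop for the transient failures mentioned above. Retrying will
        // not help if the name itself breaks the rules.
        public static CloudQueue EnsureQueue(CloudStorageAccount account)
        {
            var client = account.CreateCloudQueueClient();
            var queue = client.GetQueueReference("order-processing"); // valid per the rules above

            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    queue.CreateIfNotExist();
                    return queue;
                }
                catch (StorageClientException)
                {
                    if (attempt >= 3) throw;
                    Thread.Sleep(TimeSpan.FromSeconds(2 * attempt)); // simple backoff
                }
            }
        }
    }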

Thank you,
PK.



One of the latest features introduced in SQL Azure is the ability to apply firewall settings to your SQL Azure server and allow only specific IP ranges to connect to it. This can be done through the SQL Azure Portal or through code using stored procedures.

If you want to take a look at which rules are active on your SQL Azure server, you can use:

    select * from sys.firewall_rules

That will give you a view of your firewall rules.

If you want to add a new firewall rule, you can use "sp_set_firewall_rule". The syntax is "sp_set_firewall_rule <firewall_rule_name> <ip range start> <ip range end>". For example:

    exec sp_set_firewall_rule N'My setting','192.168.0.15','192.168.0.30'

If you want to delete that rule, you can use:

    exec sp_delete_firewall_rule N'My setting'
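And if you'd rather call those stored procedures from code, a minimal ADO.NET sketch (hypothetical server and credentials) could look like this; note that the firewall stored procedures and sys.firewall_rules live in the master database.

    using System.Data;
    using System.Data.SqlClient;

    public static class SqlAzureFirewall
    {
        // Adds (or updates) a firewall rule for the given IP range.
        public static void AllowRange(string ruleName, string startIp, string endIp)
        {
            const string connectionString =
                "Server=tcp:myserver.database.windows.net;Database=master;" +
                "User ID=myadmin@myserver;Password=...;Encrypt=True;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("sp_set_firewall_rule", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@name", ruleName);
                command.Parameters.AddWithValue("@start_ip_address", startIp);
                command.Parameters.AddWithValue("@end_ip_address", endIp);

                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }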


PK.



When you have your service running on Windows Azure, the last thing you want is to monitor it every now and then and decide whether specific actions are needed based on your monitoring data. You want the service to be, to some degree, self-managing and to decide on its own what actions should take place to satisfy a monitoring alert. In this post I’m not going to use the Service Management API to increase or decrease the number of instances; instead I’m going to log a warning. In a future post I’m going to use it in combination with this logging message, so consider this a series of posts, with this being the first one.

The most common scenario is dynamically increasing or decreasing VM instances to be able to process more messages as our queues are getting filled up. You have to create your own “logic”, a decision mechanism if you like, which will execute some steps and bring the service to a state that satisfies your condition, because there is no out-of-the-box solution from Windows Azure. A number of companies have announced that their monitoring/health software is going to support Windows Azure. You can find more information about that if you search the internet, or visit the Windows Azure Portal under the Partners section.

In the code below I’m monitoring the messages inside a queue on every role cycle:

    CloudQueue cloudQueue = cloudQueueClient.GetQueueReference("calculatep"); // queue names must be all lowercase

    cloudQueue.CreateIfNotExist();
    cloudQueue.FetchAttributes(); // refreshes properties such as ApproximateMessageCount

    /* Call this method to calculate your WorkLoad */
    CalculateWorkLoad(cloudQueue.ApproximateMessageCount);

and this is the code inside CalculateWorkLoad:

    public void CalculateWorkLoad(int? messages)
    {
        /* If there are messages, find the average number of messages
           available every X seconds,
           X = the ThreadSleep time, in my case every 5 seconds */
        if (messages != null)
            average = messages.Value / (threadsleep / 1000);

        DecideIncDecOfInstances(average);
    }

Note that if you want to get accurate values for the queue’s properties, you have to call FetchAttributes() first.

There is nothing fancy in my code; I’m just finding an average workload (the number of messages in my queue) every 5 seconds and passing this value to DecideIncDecOfInstances(). Here is the code:

    public void DecideIncDecOfInstances(int average)
    {
        int instances = 2;

        /* If my average is above 1000, calculate how many instances
           we need and log the warning */
        if (average > 1000)
        {
            OneForEveryThousand(average, ref instances);
            WarnWeNeedMoreVM(instances);
        }
    }

OneForEveryThousand is actually increasing the default number of instances, which is two (2), by one (1) for every thousand (1000) messages in the queue’s average count.

This is the final part of my code, WarnWeNeedMoreVM, which logs our need for more or fewer VMs:

    public void WarnWeNeedMoreVM(int instances)
    {
        // Nothing to report if the default instance count is enough.
        if (instances == 2) return;

        Trace.WriteLine(String.Format("WARNING: Instances Count should be {0} on this {1} Role!",
                        instances, RoleEnvironment.CurrentRoleInstance.Role.Name), "Information");
    }

In my next post in this series, I’m going to use the newly released Service Management API to upload a new configuration file which increases or decreases the number of VM instances in my role(s) dynamically. Stay tuned!

PK.
