
ARCast.TV - Extensible ERP with SingularLogic

Does anyone remember an old blog post called "The Architecture that Wasn't!", where I promised to write about an architecture that makes building complex applications so easy that anyone who hasn't used it doesn't believe it is possible, even though it has been successful for over a decade? And then I forgot to write about it?
It seems that SingularLogic, arguably the largest software company in Greece, has become a believer.

In his latest ARCast, Ron Jacobs interviews Elias Theocharis of SingularLogic, who describes the new ERP platform they are building on .NET, WCF and Workflow Foundation. The platform appears to be model- and workflow-driven, allowing implementers to easily create or customize business processes using workflows. Instead of providing its own customization/development environment, the platform integrates with Visual Studio using Guidance packages. This gives developers and implementers a familiar and powerful environment, although it makes it harder for consultants or customers to customize the application. Currently, SingularLogic is creating application components. Although it is not mentioned, I expect these components will contain the models and workflow definitions for specific application domains.

During the interview, Theocharis mentions his concerns about WF's scalability (16:21). An ERP platform has to scale from a few concurrent users to several hundred or even thousands of users. Well, Dimitris Panagiotakis and I can guarantee that workflow can scale - if you take care. Dimitris and I worked at K-NET Group in 2002 building just such a model-driven platform. Dimitris built our workflow engine; I did the COM+ and database stuff. Our main customer was the Central Depository of Athens, which wanted to automate its back-office processes. The system had to support 200 concurrent business transactions, so we had to make sure there were no bottlenecks. Well, we did OK: the system achieved 10,000 concurrent business transactions during test runs - right before the switch connecting the test clients to the servers started dropping packets!

What we did was follow the scalability mantras as much as possible. Our platform stored the definitions of domain objects in a configuration database and generated the stored procedures and views that mapped the objects to the application database. Apart from standard actions like Create, Edit, Delete, Print and Show Report, the developer could create new object actions using VBScript, which was also stored in the object's definition.
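To make the "everything is an action" idea concrete, here is a minimal sketch, with hypothetical names, of such a table-driven dispatch in C#. The actual platform was COM+ and VBScript, so this is an illustration of the idea, not the original code:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public class ActionDispatcher
    {
        private readonly string _connectionString;

        public ActionDispatcher(string connectionString)
        {
            _connectionString = connectionString;
        }

        // The client never touches the object directly; it only names the
        // object, the action and the parameters.
        public void Execute(Guid objectId, string actionId,
                            IDictionary<string, object> parameters)
        {
            // Hypothetical naming convention for the generated stored
            // procedures, e.g. "Customer_Edit".
            string procName = LookupObjectType(objectId) + "_" + actionId;

            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(procName, connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@ObjectID", objectId);
                foreach (var pair in parameters)
                    command.Parameters.AddWithValue("@" + pair.Key, pair.Value);

                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        private string LookupObjectType(Guid objectId)
        {
            // In the real platform this lookup hit the configuration database.
            return "Customer";
        }
    }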

Here are some of the things we did to ensure the system was as scalable as possible:

  • No direct modifications to the objects or the database were allowed. Everything had to be an action. This way a client only had to send the following data to the COM+ components: ObjectID, ActionID and the action parameters - exactly the shape of the dispatch sketch above. The processing component would then load the object and perform the action. Objects were kept in memory only for as long as necessary.
  • We limited the use of shared resources (e.g. memory, connections) as much as possible.
    • We used object pooling on our Data Layer component to limit the number of concurrent database connections - there was no way to cap connection pooling back then (see the pooled-component sketch after this list). Although this may seem counterintuitive, fewer connections mean fewer blocks, which results in higher throughput.

    • We did NOT use in-memory caching! Again, this resulted in less contention for memory and allowed more clients to be serviced at the same time. Even though individual transactions were slower, we could service a lot more of them.

  • Workflow and object state were stored in the database, not in memory. We relied on SQL Server 2000 to locate and update the state of each workflow. If there is one thing DBMSs are good at, it is finding data FAST. This would also have allowed us to scale out easily, although the 10,000-transaction mark was achieved using a single server. An added benefit was that we could create business process reports very easily and very quickly.

  • Permissions and transformations were enforced at the database level. Each domain object was loaded using stored procedures that performed some complex joins between the object's tables, the permission tables and the view definitions. We relied on extensive indexing to make the process as fast as possible.

  • We did NOT use pessimistic locking! ONLY optimistic locking was allowed! Instead ...

  • We used a two-level versioning system (major-minor versions) to ensure that each business process would work on its own copy of an object's data (minor version, e.g. 5.1), while allowing other clients to see the last published major version (e.g. 5.0); see the versioning sketch after this list. That also gave us long-running transactions, a really, really useful feature for business processes.

  • Transactions were initiated at the highest-level COM+ component. In essence, a new transaction was created each time a workflow action called a COM+ component (see the pooled-component sketch after this list). Even though this was very coarse-grained, as COM+ back then only supported the Serializable isolation level, it did save us from blocking calls between components.

  • If we had had the time, we would have used MSMQ as well, moving to a message-processing model. This would have decoupled the client and the workflow engine from the processing of requests on domain objects. That way we could also have relaxed the Transaction-per-Call semantics.
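For the object pooling and Transaction-per-Call bullets, here is roughly what the same configuration looks like with the System.EnterpriseServices attributes that .NET exposes for COM+ services. Our 2002 components were plain COM+, and pooling and transactions lived on different components, so treat this single class as a sketch of the idea, not our actual code:

    using System;
    using System.EnterpriseServices;

    // Pooling caps the number of live components - and therefore database
    // connections - while RequiresNew starts a fresh transaction on every
    // call: the Transaction-per-Call model described above.
    [ObjectPooling(MinPoolSize = 5, MaxPoolSize = 20)]
    [Transaction(TransactionOption.RequiresNew)]
    public class DataLayer : ServicedComponent
    {
        [AutoComplete] // commit on normal return, abort on exception
        public void ExecuteAction(Guid objectId, string actionId)
        {
            // load the object, apply the action, persist - all inside
            // this call's own (Serializable, back then) transaction
        }
    }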
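And here is a minimal sketch, with hypothetical names, of the major-minor versioning scheme. The real implementation lived in stored procedures, but the logic was essentially this:

    using System;

    public class VersionedRecord
    {
        public Guid ObjectId;
        public int Major;          // e.g. 5 - the version everyone else reads
        public int Minor;          // 0 = published, > 0 = a working copy
        public Guid? OwnerProcess; // the business process that checked it out

        // Start of a long-running transaction: copy 5.0 into 5.1 for the
        // process, leaving the published row untouched for other readers.
        public VersionedRecord CheckOut(Guid processId)
        {
            return new VersionedRecord
            {
                ObjectId = ObjectId,
                Major = Major,
                Minor = Minor + 1,
                OwnerProcess = processId
            };
        }

        // Publishing promotes 5.1 to 6.0. The optimistic check rejects the
        // publish if another process published a new major version meanwhile.
        public void Publish(VersionedRecord workingCopy)
        {
            if (workingCopy.Major != Major)
                throw new InvalidOperationException(
                    "Concurrent publish detected - resolve and retry.");
            Major = Major + 1;
            Minor = 0;
            OwnerProcess = null;
        }
    }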

How do the lessons of 2002 translate to a model-driven platform with workflow in 2008?

  • Use message passing and action messages instead of directly modifying objects. Not only is it more scalable, it allows asynchronous and offline processing, and it enables a Service-Oriented Architecture (see the WCF sketch after this list).
    It also allows us to use fewer threads to service the same number of clients and workflows.
  • Limit the connections to the database. Fewer concurrent connections mean fewer locks, fewer blocks, less delay in transaction processing.
  • Use the database for workflow tracking. I am not sure that dehydrating and rehydrating a workflow is such a good idea from a scalability perspective.
  • Use versioning or some other long transaction system to allow each business process to work on its own data.
  • NO PESSIMISTIC LOCKING!
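As an illustration of the message-passing bullet, here is what an action message could look like as a one-way WCF contract. The names are mine, for illustration only:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class ActionMessage
    {
        [DataMember] public Guid ObjectId { get; set; }
        [DataMember] public string ActionId { get; set; }
        [DataMember] public Dictionary<string, string> Parameters { get; set; }
    }

    [ServiceContract]
    public interface IActionService
    {
        // One-way: the caller's thread is released immediately, so fewer
        // threads can service the same number of clients and workflows.
        [OperationContract(IsOneWay = true)]
        void Submit(ActionMessage action);
    }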

.NET 3.5 already offers several of the components needed to create a model-driven platform. The various ORMs offer a great solution for the Data Layer. The dynamic languages will allow us to create and modify object actions in a fully type-safe way, even on the customer's site (try doing type-safe with VBScript). WF + WCF offer the basics for building the rest of the platform, although WCF is not as easy to use for message processing as it could be. Fortunately, there are some great open-source solutions for this.
One of the best is NServiceBus, created by Udi Dahan. It offers essential functionality right out of the box, like different messaging models, publisher/subscriber support and long-running transactions.
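To give a flavor, here is what a publish/subscribe handler looks like in the style of NServiceBus. I am writing the interfaces from memory, so treat the exact names as an assumption and check the NServiceBus documentation before using them:

    using System;
    using NServiceBus;

    // An event published when a business process reaches a milestone.
    public class OrderApproved : IMessage
    {
        public Guid OrderId { get; set; }
    }

    // Subscribers react to published events; the bus takes care of
    // delivery, retries and transactions.
    public class OrderApprovedHandler : IHandleMessages<OrderApproved>
    {
        public void Handle(OrderApproved message)
        {
            // start or resume the long-running process for this order
        }
    }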

Of course, those are only a few of the services a full application platform needs. In fact, this is a great subject for discussion, which is why the second meeting of the Hellenic Architecture User Group will be about executable models and model-driven development. Some folks who have built model-driven platforms in the past have already promised to come. The date is not yet fixed, so please join the Facebook group and propose a convenient date!

Published Tuesday, March 25, 2008, 12:28 AM by Παναγιώτης Καναβός