
ARCast.TV - Extensible ERP with SingularLogic

Does anyone remember an old blog post titled "The Architecture that Wasn't!", where I promised to write about an architecture that makes building complex applications so easy that anyone who hasn't used it doesn't believe it is possible, even though this architecture has been successful for over a decade? And then I forgot to write about it?
It seems that SingularLogic, arguably the largest software company in Greece, has become a believer.

In his latest ARCast, Ron Jacobs interviews Elias Theocharis of SingularLogic, who describes the new ERP platform they are building on .NET, WCF and Workflow Foundation. The platform seems to be model- and workflow-driven, allowing implementers to easily create or customize business processes using workflows. Instead of providing its own customization/development environment, the platform integrates with Visual Studio using Guidance packages. This gives developers and implementers a familiar and powerful environment, although it makes it harder for consultants or customers to customize the application. Currently, SingularLogic is creating application components. Although it is not mentioned, I expect the components will contain the models and workflow definitions for specific application domains.

During the interview, Theocharis mentions his concerns about WF's scalability (16:21). An ERP platform has to scale from a few concurrent users to several hundred or even thousands of users. Well, Dimitris Panagiotakis and I can guarantee that workflow can scale - if you take care. Dimitris and I worked at K-NET Group in 2002 building just such a model-driven platform. Dimitris built our workflow engine; I did the COM+ and database stuff. Our main customer was the Central Depository of Athens, which wanted to automate its back-office processes. The system had to support 200 concurrent business transactions, so we had to make sure there were no bottlenecks. Well, we did OK, as the system achieved 10,000 concurrent business transactions during test runs - right before the switch connecting the test clients and the servers started dropping packets!

What we did was follow the scalability mantras as much as possible. Our platform stored the definitions of domain objects in a configuration database and generated the stored procedures and views that mapped the objects to the application database. Apart from standard actions like Create, Edit, Delete, Print and Show Report, the developer could create new object actions using VBScript, which was also stored in the object's definition.
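To make the idea concrete, here is a minimal sketch of such a definition-driven action registry. This is a hypothetical reconstruction in Python, not the actual K-NET code; the names (`ObjectDefinition`, `perform_action`) and the Python snippets standing in for the stored VBScript are all illustrative:

```python
# Minimal sketch of a definition-driven action registry (hypothetical names).
# Object definitions live in a configuration store; each custom action is a
# script stored alongside the definition. Python snippets stand in here for
# the VBScript that the original platform stored.

class ObjectDefinition:
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # action_id -> script source

DEFINITIONS = {
    "Invoice": ObjectDefinition("Invoice", {
        "Approve": "obj['status'] = 'Approved'",
        "Reject":  "obj['status'] = 'Rejected'",
    })
}

def perform_action(obj_type, obj, action_id, params=None):
    """Look up the action script in the definition and run it on the object."""
    script = DEFINITIONS[obj_type].actions[action_id]
    exec(script, {"obj": obj, "params": params or {}})
    return obj

invoice = {"id": 42, "status": "Draft"}
perform_action("Invoice", invoice, "Approve")
print(invoice["status"])  # Approved
```

Adding a new action to an object is then a configuration change, not a code deployment, which is what made the platform customizable on site.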

Here are some of the things we did to ensure the system was as scalable as possible:

  • No direct modifications to the objects or the database were allowed. Everything had to be an action. This way a client only had to send the following data to the COM+ components: ObjectID, ActionID, action parameters. The processing component would then load the object and perform the action. Objects were kept in memory only as long as necessary.
  • We limited the use of shared resources (e.g. memory, connections) as much as possible.
    • We used object pooling on our Data Layer component to limit the number of concurrent database connections (there was no way to put a limit to connection pooling back then). Although this may seem counterintuitive, fewer connections means fewer blocks and results in higher throughput.

    • We did NOT use in-memory caching! Again, this resulted in less contention for memory and allowed more clients to be serviced at the same time. Even though individual transactions were slower, we could service a lot more of them.

  • Workflow and object state was stored in the database, not in-memory. We relied on SQL Server 2000 to locate and update the state of each workflow. If there is one thing DBMSs are good at, it is finding data FAST. This could also allow us to scale out easily, although the 10,000 transaction mark was achieved using a single server only. An added benefit was that we could create business process reports very easily and very quickly.

  • Permissions and transformations were enforced at the database level. Each domain object was loaded using stored procedures that performed some complex joins between the object's tables, the permission tables and the view definitions. We relied on extensive indexing to make the process as fast as possible.

  • We did NOT use pessimistic locking! ONLY optimistic locking was allowed! Instead ...

  • We used a two-level versioning system (major-minor versions) to ensure that each business process would work on its own copy of an object's data (minor version, e.g. 5.1), while allowing other clients to see the last published major version (e.g. 5.0). That also gave us long-running transactions, a really, really useful feature for business processes.

  • Transactions were initiated at the highest-level COM+ component. In essence, a new transaction was created each time a workflow action called a COM+ component. Even though this was very coarse-grained, as COM+ back then only supported the Serializable isolation level, it did save us from blocking calls between components.

  • If we had had the time, we would have used MSMQ as well, moving to a message-processing model. This would have decoupled the client and the workflow engine from the processing of domain object requests. It would also have let us relax the Transaction-per-Call semantics.
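The first and last bullets combine naturally: clients submit (ObjectID, ActionID, parameters) messages and a processing component consumes them. The sketch below illustrates the pattern with a simple in-process queue standing in for MSMQ; all names are illustrative, not the original platform's:

```python
# Sketch of the action-message pattern described above (hypothetical names).
# The client never touches objects directly; it only submits
# (ObjectID, ActionID, parameters) messages. A processing component
# consumes them - a plain in-process queue stands in for MSMQ here.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class ActionMessage:
    object_id: int
    action_id: str
    params: dict = field(default_factory=dict)

class Processor:
    def __init__(self, store):
        self.store = store          # object_id -> object state
        self.queue = deque()

    def submit(self, msg):
        self.queue.append(msg)      # the client call returns immediately

    def run(self):
        while self.queue:
            msg = self.queue.popleft()
            obj = self.store[msg.object_id]         # load on demand
            if msg.action_id == "SetField":
                obj[msg.params["field"]] = msg.params["value"]
            # the object reference goes out of scope here:
            # objects stay in memory only as long as necessary

store = {1: {"name": "old"}}
p = Processor(store)
p.submit(ActionMessage(1, "SetField", {"field": "name", "value": "new"}))
p.run()
```

Because the queue decouples submission from processing, the same design supports asynchronous and offline processing without changing the client.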

How do the lessons of 2002 translate to a model driven platform with workflow in 2008?

  • Use message passing and action messages instead of direct modification of objects. Not only is it more scalable, it allows asynchronous and offline processing and enables a Service Oriented Architecture. It also allows us to use fewer threads to service the same number of clients and workflows.
  • Limit the connections to the database. Fewer concurrent connections mean fewer locks, fewer blocks, less delay in transaction processing.
  • Use the database for workflow tracking. I am not sure that dehydrating and rehydrating a workflow is such a good idea from a scalability perspective.
  • Use versioning or some other long transaction system to allow each business process to work on its own data.
  • NO PESSIMISTIC LOCKING!
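The "optimistic locking only" rule can be sketched in a few lines of SQL: an update succeeds only if the row still carries the version the client originally read; otherwise the row changed underneath us and the caller must retry or merge. This is an illustrative sketch with an assumed schema, using SQLite for brevity:

```python
# Optimistic concurrency sketch (assumed single-version schema; the
# major/minor versioning described above layers on top of this idea).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obj (id INTEGER PRIMARY KEY, data TEXT, version INTEGER)")
conn.execute("INSERT INTO obj VALUES (1, 'original', 1)")

def optimistic_update(conn, obj_id, expected_version, new_data):
    """Update only if nobody changed the row since we read it."""
    cur = conn.execute(
        "UPDATE obj SET data = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_data, obj_id, expected_version))
    return cur.rowcount == 1   # False -> concurrent change, caller retries

print(optimistic_update(conn, 1, 1, "first writer"))   # True
print(optimistic_update(conn, 1, 1, "stale writer"))   # False - lost the race
```

No row is ever held locked across user think time, which is exactly why throughput does not collapse as concurrent clients pile up.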

.NET 3.5 already offers several of the components needed to create a model-driven platform. The various ORMs offer a great solution for the Data Layer. The dynamic languages will allow us to create and modify object actions in a fully type-safe way, even at the customer's site (try doing that type-safely with VBScript). WF + WCF offer the basics for building the rest of the platform, although WCF is not as easy to use for message processing as it could be. Fortunately, there are some great open-source solutions to this.
One of the best is NServiceBus, created by Udi Dahan. It offers essential functionality right out of the box, such as different messaging models, publisher/subscriber support and long-running transactions.
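NServiceBus itself is a .NET library, so the following is not its actual API; it is only a toy sketch of the publisher/subscriber messaging model it supports, with all names invented for illustration:

```python
# Toy publisher/subscriber bus illustrating the messaging model
# (not NServiceBus's real API - that is a .NET library).

from collections import defaultdict

class Bus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, message_type, handler):
        self.handlers[message_type].append(handler)

    def publish(self, message):
        # every subscriber of this message type receives the event,
        # without the publisher knowing who the subscribers are
        for handler in self.handlers[type(message).__name__]:
            handler(message)

class OrderPlaced:
    def __init__(self, order_id):
        self.order_id = order_id

bus = Bus()
log = []
bus.subscribe("OrderPlaced", lambda m: log.append(("billing", m.order_id)))
bus.subscribe("OrderPlaced", lambda m: log.append(("shipping", m.order_id)))
bus.publish(OrderPlaced(7))
print(log)  # [('billing', 7), ('shipping', 7)]
```

The point is the decoupling: new subscribers (a reporting service, an audit log) can be added without touching the publisher, which is what makes the model a good fit for an extensible ERP platform.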

Of course, those are only a few of the services a full application platform needs. In fact, this is a great subject for discussion, which is why the second meeting of the Hellenic Architecture User Group will be about executable models and model-driven development. Some folks who have built model-driven platforms in the past have already promised to come. The date is not yet fixed, so please join the Facebook group and propose a convenient date!

Published Tuesday, 25 March 2008 12:28 AM by Παναγιώτης Καναβός


Comments:

# re: ARCast.TV - Extensible ERP with SingularLogic

The project seems interesting, especially for 2002-era technology. But what confused me was that you used no memory caching. What was the reason for choosing a non-memory solution, when Windows could use lots of GBs and price was not an issue?

I have also watched the ARCast and I was curious that there was no post on DNZ. It's not common to see Greek companies gain publicity or become technical case studies. I think it would be useful to see many more companies or developers talk about their solutions. Why not start a Case Studies topic on DNZ?

Tuesday, 25 March 2008 10:07 PM by sakalis

# re: ARCast.TV - Extensible ERP with SingularLogic

For more than a decade now I have been working on implementing a vision where business analysts can simply compose the new or improved business processes they need from a set of reusable business components, after which a run-time Business Process Management (BPM) engine invokes the appropriate reusable SOA-based services to execute those processes.

My effort started in 1997 at UMIST, Manchester, where I did my MSc with Dr Vassilis Karakostas and carried on with my PhD in the same university. At the time I addressed the issue as “Process driven Enterprise Integration Framework” – hence the title of the thesis - and developed a framework that used the Suppliers Working Group Taxonomy (Goranson, H. T., 1992, Dimensions of Enterprise Integration, (ICEIMT)) as a baseline that sees enterprise integration from three domain modeling perspectives:

• Business Process Characterization

• Business object characterization

• Execution environment

It was not until a few years later that I met Panagiotis Kanavos and Dimitris Panagiotakis in the same university, and even later that I worked with them at K-NET Group, designing and implementing the first, I believe, model-driven platform in Greece.

“How do the lessons of 2002 translate to a model driven platform with workflow in 2008?”

In my view, it is clearly a matter of architecture:

At itbs (www.itbs.gr) we follow a BPM-to-SOA approach with a layered architecture: business components in a BPM layer connect via an enterprise service bus down to one or more service components in a SOA layer. At the SOA layer, some of these service components call other service components, while the most granular ones call the necessary implementations, which are typically wrapped legacy applications, either custom-developed or off-the-shelf.

Once you have a set of such components, BPM will allow you to recompose them into new processes, and SOA will allow you to deploy and execute them on an enterprise service bus.

This is what BPM actually does, and it is what makes these components potentially very powerful enablers. But until we figure out the complete set of those reusable business components, the full potential of the BPM-to-SOA approach will remain in the realm of legend.

Wednesday, 26 March 2008 1:50 PM by Panos Papavassiliou

# re: ARCast.TV - Extensible ERP with SingularLogic

And of course, my greetings!

And I forgot to mention Dimitris Palaiokostas, who completed the platform for the Central Depository (KAA).

Wednesday, 26 March 2008 2:11 PM by Panos Papavassiliou

# re: ARCast.TV - Extensible ERP with SingularLogic

sakalis, back then 2 GB was what we thought of as unlimited memory! In any case, shared resources always limit scalability. Even if we assumed truly unlimited memory and instantaneous read times, there would still be contention among the readers and writers of the cache.

In a web application there are a lot more reads than writes, so a cache can increase speed a lot while causing little contention. This results in greater throughput. It is also easier to synchronize the cache among many web servers, as the cached data rarely changes.

In a Business Process or ERP platform there are many more writes, which means that cached data gets stale much faster and there is more contention. There is less cacheable data too. Most of the data processed is volatile object data that is only updated by a single client at a time.

While it makes sense to cache that data on the client, caching it on the server wouldn't help, as the cached data would stay idle for a long time (speaking in computer time). The cached data would have to wait several seconds for a request from its client.

Even with reference data that changes once a month or once a week, like lookup values or interest rates, you can have concurrency issues. When should the clients see a change in interest rates? Right away? The next day? Even for "read-only" data you may need a synchronization mechanism.

Depending on the architecture, you can cache the model and process definitions. In SingularLogic's case this is fairly easy, as the definitions are created by the programmers and do not change at the customer's site. In K-NET's case the model and schema were completely dynamic and modifiable even by the customer through the administration UI. In this case you can still use caching if you take care to invalidate the definitions as soon as they change.
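That invalidation pattern can be sketched as a read-through cache with an explicit invalidate hook. This is a minimal illustration with invented names, not code from either platform:

```python
# Read-through cache for model/process definitions with explicit
# invalidation, as described above (hypothetical names).

class DefinitionCache:
    def __init__(self, loader):
        self.loader = loader        # fetches a definition from the database
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.loader(key)   # load from the DB on a miss
        return self.cache[key]

    def invalidate(self, key):
        # called as soon as an administrator edits the definition
        self.cache.pop(key, None)

loads = []
def loader(key):
    loads.append(key)
    return {"name": key, "fields": []}

cache = DefinitionCache(loader)
cache.get("Invoice")
cache.get("Invoice")            # served from the cache, no second load
cache.invalidate("Invoice")
cache.get("Invoice")            # edited definition is reloaded
print(loads)  # ['Invoice', 'Invoice']
```

The same hook is where a distributed setup would broadcast the invalidation to the other servers.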

Back then, we had no idea that the platform could scale so well, so we tried to avoid scalability issues as much as possible. We had seen previous client-server attempts fail miserably as soon as 40-50 clients tried to connect to the database. And we had to complete the platform and a 200+ business process project in less than a year.

Today, we would probably add a configurable cache, so that we could turn it on or off according to the system's needs and performance. ASP.NET and SQL Server provide a good mechanism for both caching and invalidating stale data. ADO.NET Sync Services also provide a way to cache reference data on the clients, perhaps even on offsite servers. Perhaps we could use ADO.NET Sync Services to create a P2P platform that would sync even model and process definitions, not just lookup data.

Of course, we will discuss all that in the next Architecture Group meeting!

Thursday, 27 March 2008 9:10 AM by Παναγιώτης Καναβός
