Distributed computing

Distributed computing means that a program is split into parts that run simultaneously on multiple computers communicating over a network. Although this definition is clear, what features a programming language should provide to support distributed computing is highly nebulous, and opinions differ widely. Therefore, we will first define the aims of Arplan in this respect and the features it should have to help build such distributed applications.

Transparent networking

It should be as easy to send a message to a remote server as to any local entity. Put another way, it should be as easy to call a remote function or service as a local one. The result of such a call can be a meaningful answer, an error, or a timeout, among others; in general, there can be multiple return types. Notably, this makes a throw/catch mechanism unnecessary: an error type is simply one of the possible results.

Messaging/calling multiplicity

In classical programming, everything follows one sequential "path": you call something (a method, for example) and wait for the result. This is neither sufficient nor efficient in a distributed environment, for two reasons: communication takes time, and links or nodes may be dead. Naively sending requests and waiting for each response one by one is therefore inefficient. Instead, it should be natural to send n messages (or make n service calls) and handle the responses in their order of arrival.
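The fan-out pattern above can be sketched with Python's asyncio; the `query` coroutine is a hypothetical stand-in for a real remote call, with `asyncio.sleep` simulating network latency.

```python
import asyncio
import random

async def query(node: str) -> str:
    # Simulated network latency; a real call would await a socket.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return f"reply from {node}"

async def fan_out(nodes):
    # Fire all n requests at once...
    tasks = [asyncio.create_task(query(n)) for n in nodes]
    replies = []
    # ...and process each reply as soon as it arrives, so a slow
    # node does not block the fast ones.
    for finished in asyncio.as_completed(tasks):
        replies.append(await finished)
    return replies

replies = asyncio.run(fan_out(["node-a", "node-b", "node-c"]))
print(replies)
```

A timeout per task (e.g. `asyncio.wait_for`) would cover the dead-node case in the same loop.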

Concurrency issues

With concurrent processes (and therefore also distributed systems), coordinating tasks and resource access is a major issue. Let's take the classical example of the bank account.

The task is simple: first check the balance, then, if funds are sufficient, withdraw the requested amount. When multiple processes do this at the same time, we must ensure that funds were not withdrawn by another concurrent process between the moment we checked the balance and the moment we withdrew the amount.
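One classical fix is to hold a lock across the whole check-then-withdraw critical section, sketched here in Python; the `Account` class is illustrative, not part of any library.

```python
import threading

class Account:
    def __init__(self, balance: int):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw(self, amount: int) -> bool:
        # Without the lock, another thread could withdraw between the
        # balance check and the subtraction, overdrawing the account.
        with self.lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False

account = Account(100)
# Two concurrent withdrawals of 60 from a balance of 100:
threads = [threading.Thread(target=account.withdraw, args=(60,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(account.balance)  # only one withdrawal can succeed, leaving 40
```

If the check and the withdrawal were two separate locked operations instead of one critical section, both withdrawals could pass the check and the balance would go negative.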

With transactions involving many processes and resources, handling such concurrency issues can quickly become very complex. Sadly, there is no miraculous solution that avoids this complexity: the fight for resources will always exist. Ways to deal with it include:

- locks

- monitors

- tuple space

- optimistic transactional memory

Each has its advantages and disadvantages.
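To contrast with the pessimistic lock above, the optimistic approach can be sketched as a read-compute-commit loop: read a version, compute the new state, and commit only if the version is unchanged, retrying otherwise. The `VersionedCell` class is an illustrative toy, not a real transactional-memory library; a lock guards only the brief commit step, not the whole computation.

```python
import threading

class VersionedCell:
    def __init__(self, value):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()  # guards only the commit step

    def read(self):
        with self._lock:
            return self._value, self._version

    def compare_and_set(self, expected_version, new_value) -> bool:
        with self._lock:
            if self._version != expected_version:
                return False  # someone committed in between: retry
            self._value = new_value
            self._version += 1
            return True

def optimistic_add(cell: VersionedCell, delta: int):
    # Optimistic transaction: compute without holding a lock,
    # then commit; on conflict, re-read and try again.
    while True:
        value, version = cell.read()
        if cell.compare_and_set(version, value + delta):
            return

cell = VersionedCell(0)
threads = [threading.Thread(target=optimistic_add, args=(cell, 1))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.read()[0])  # all 8 increments commit despite contention
```

The trade-off is visible even in this toy: under low contention no transaction waits, but under high contention work is repeatedly thrown away and redone.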