This paper appears in the Proceedings of ICCC 76, Toronto, pp 37-43.

RCP, the Experimental Packet-Switched Data Transmission Service of the French PTT:
History, Connections, Control

A. Bache, L. Guillou, H. Layec, B. Lorig and Y. Matras

Centre Commun d'Etudes de Télévision et Télécommunications, France

ABSTRACT

In France, the operation of the experimental packet-switching network RCP is preparing the way for the installation of the public packet-switched service for data transmission known as TRANSPAC and due to be inaugurated in 1978.

This paper begins with a historical description of the RCP project, pointing out the main phases, the main decisions taken, both technical and political, and the evolution of the context.

Then a full description of RCP as it appears at the beginning of 1976 is given, followed by a study of the main connections to the network and the experience they have provided.

The control center of the network is then described. This control center is a minicomputer connected to the network, monitoring it in many different ways.

In conclusion, a view is given of the future activities of the RCP team.

HISTORY AND TODAY'S CONFIGURATION

In 1971, the French PTT Administration decided to develop an experimental packet-switching network. The project was named RCP (Réseau à Commutation par Paquets: Packet-Switching Network) and four persons were assigned to it.

One of the main goals was to prove the feasibility of a public data transmission service based on packet switching. That technique had already been tested in the ARPA network and the RCP team began its work by a careful study of ARPANET, including conversations with its designers; contact was also made with representatives of the British Post Office who were tackling the same problems.

The first technical decision made was to select, as a minicomputer on which to experiment, DEC's PDP 11, because of its architecture and its off-the-shelf line interfaces, and also because no such hardware existed in France at that time.

A PDP 11/20 was then installed in Paris in December 1971 and the team began at once to develop software tools (mainly an operating system for the PDP and a PDP assembler on a GE 635). These tools were refined while being used to design a one-node network supporting only asynchronous terminals. Meanwhile, the synchronous procedure was being specified.

This period also saw the birth of the CYCLADES network, a project of IRIA (Institut de Recherche en Informatique et Automatique, under the control of the Ministry of Industry). Profitable talks with the CYCLADES team enabled the new and rival concepts of virtual circuit and datagram to be clarified. Attempts were made to merge the two projects, in order to have only one experimental packet-switching network in France: IRIA would specify high-level protocols in collaboration with CII (Compagnie Internationale pour l'Informatique, the main French computer manufacturer) while the PTT would implement the data transmission network (using the MITRA 15, a minicomputer being developed by CII but not yet on the market). It was eventually decided to go on separately.

In the middle of 1972, a new research center, CCETT (Centre Commun d'Etudes de Télévision et Télécommunications), controlled partly by the PTT, was installed in Rennes and the RCP team was transferred there.

A second PDP 11 (model 21) was delivered in November 1972 and the implementation of the synchronous procedure went on together with the programming of the first automatic call procedure on the switched telex network.

In about July 1973, the PDP's were connected by two synchronous leased lines operating at 4800 b/s. In addition, a T 1600 minicomputer, built by the French manufacturer Télémécanique, was delivered to Rennes to become the Control Center of RCP.

Four months later, a new impetus was given to the project when the official announcement was made of the decision by the PTT Administration to set up a national public packet-switched service named TRANSPAC.

Three persons were then added to the original team and it was decided to re-program the whole system while retaining the basic principles (see appendix). The operating system was entirely rewritten and the telex procedure revised. The feature of automatically calling on the telex network was no longer included, because it would have given experimental users free access to the telex network.

The hardware was upgraded and the network was extended to three nodes in January 1974 (the third node being a PDP 11/40). RCP then officially became a test bed as much for the Administration as for the users, and it was decided that the experiment would end one year after the actual opening of TRANSPAC. For a further year the team continued to work, mainly on host computer connections (the first of which began to be studied in mid-74) and on improvements to the whole system (mainly concerning the PDP's line interfaces and central storage size, between November 1974 and March 1975). At the beginning of 1975 the network was operational eight hours a day.

As far as asynchronous accesses were concerned, a collaboration with SAT (Société Anonyme des Télécommunications, a French manufacturer of transmission equipment), which started at the end of 1973, permitted the development of a specialized adapter handling, directly on the PDP 11 Unibus, the stream of information emitted by a remote time-division multiplexor, the TELSAT 3729.

The first mock-up was tested in July 1974 and, at the end of that year, the network was offering that type of asynchronous access in Lille, Marseille, Bordeaux and Paris. A fifth multiplexor was installed in Rennes in October 1975.

Figures 2a), 2b), 2c) indicate the evolution of the network up to its configuration in March 1976. Table 1 brings together information about accesses, lines and modems.

The appendix gives the basic principles of RCP. For more detailed information, see reference [1].

CONNECTIONS

The first connection of a host computer to RCP was undertaken in about mid-74 and became operational at the beginning of 1975. Télésystèmes, the company owning the computer (an XDS 940), has been able since then to offer access to its time-sharing system via RCP. The linkage was effected by means of a MITRA 15 minicomputer placed between RCP and the XDS 940, operating the RCP procedure on one side and, on the other, the access procedure to the XDS 940, which was developed specially for the occasion by Télésystèmes.

Another connection of the same sort was set up a little later, with a Honeywell 6080 belonging to the Administration, the software for the link being implemented on the DATANET 355, the standard front-end machine for the H6080. Access to the time-sharing system of the 6080 has been provided via RCP since the end of 1975.

Study of a more complex type of linkage was begun towards the end of 1975. The idea was to construct, for EDF (Electricité de France, the public company that controls gas and electricity production, transport and distribution in France), a network made of two fundamental elements:

- Terminal stations acting as concentrators for terminals of various complexities.

- Central stations, more powerful computers, acting as front-end machines for several data processing computers, and linked to several terminal stations.

To establish these inter-station connections, EDF uses the public network in the following way: virtual circuits are treated practically as leased lines, managed from end to end, i.e. from station to station, by protocols which duplicate to a certain extent those of the network. EDF will thus use the transmission facilities of the public network for its private network.

A roughly similar joint study with the company PHILIPS FRANCE is also under way. The end in view is to link together several computing centers and to provide access to them via the network. The machines to be linked up are mainly Philips machines, in particular mini-computers of the P 800 series as front-end machines.

Another type of connection has recently been set up, with the help of CAP-SOGETI (a French software development company), for a very different purpose. The machine in question is an English minicomputer (Computer Technology Limited's Modular 1), which is to be used for the teaching of telecomputing. Special software has been written and "hung on" to the operating system. It is composed of "services", which can talk to each other by means of a monitor working on the virtual circuit principle. Amongst the services working at the moment are to be found:

- The connection to RCP.

- Access to the Mod. 1 time-sharing system via RCP.

- The Mod. 1's remote-batch system on a large computer via RCP.

We will now describe three other connections in rather greater detail:

The FLORE project (Frontal de Liaison entre un Ordinateur et un Réseau: Front-end link between a Computer and a Network).

This project, started at the end of 1975, concerns the connection of the CII IRIS 80 (twin-processor; 256 K words of 32 bits) of the CCETT to a packet-switching network: RCP as a first stage, TRANSPAC when it is put into service. The services of the computer offered to the network's subscribers are to be as wide as possible: time-sharing, remote batch processing, data bases, etc... It should be possible to experiment with new applications developed around the network: inter-computer links, file transfers, shared data bases, etc...

The Operating System of the IRIS 80, named TRANSIRIS, is capable in its standard form of supporting remote stations known as "multifunction stations" (SMF), since they can operate several remote batch posts simultaneously and concentrate the traffic of low-speed interactive terminals.

So as not to change TRANSIRIS and to avoid uselessly loading up the core store, it was decided to transform a multifunction station into a genuine front-end machine (FLORE) for the IRIS 80, linked to the RCP network, and simultaneously to study the problem of transforming another multifunction station into a concentrator station (SC), also connected to the network. This transformation, illustrated by the figure below, is made easier and more efficient since the management of the SMF is based in TRANSIRIS on the use of virtual circuits: a virtual circuit is reserved for each physical or virtual device connected to the CPU, and a special channel (channel 0) is reserved for passing connection and disconnection commands.

The principle of the connection is that all virtual communications with the processor should pass via the front-end machine. The communications arrive either from interactive terminals directly connected to the front-end machine (local TS), or from interactive terminals directly connected to the network (network TS), or from SC stations (see figure above). The front-end machine makes the right protocol conversions on the virtual channels. The pacing and acknowledgement-of-reception services are also converted and do not appear as another level of protocol.

The exchanges between the SC stations and FLORE are paced data exchanges, identical to those between the IRIS 80 and the SMF stations, but passing via the network and its switching functions.

For the network TS terminals, FLORE must manage the virtual device protocol of TRANSIRIS just as for the local TS terminals. In order to develop new applications, it is intended to extend the software to be able to gain access to the data-base management system and to the user programs running in supervisor mode.
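As an illustration of this front-end role, the following sketch (in present-day Python, purely for illustration) shows one way FLORE might map each network virtual circuit onto a TRANSIRIS virtual device, with channel 0 carrying the connection and disconnection commands. The class and method names are assumptions; the paper does not give the actual data structures.

    class FrontEnd:
        CONTROL_CHANNEL = 0                      # reserved for connect/disconnect commands

        def __init__(self):
            self.device_for_circuit = {}         # network virtual circuit -> TRANSIRIS virtual device

        def incoming_call(self, circuit_id, device_id):
            """A terminal (local TS, network TS or SC station) asks to reach the IRIS 80."""
            self.device_for_circuit[circuit_id] = device_id
            return (self.CONTROL_CHANNEL, ("CONNECT", device_id))    # command passed to TRANSIRIS

        def data_packet(self, circuit_id, data):
            """Convert a network packet into traffic for the corresponding virtual device."""
            return (self.device_for_circuit[circuit_id], data)

        def clear(self, circuit_id):
            device_id = self.device_for_circuit.pop(circuit_id)
            return (self.CONTROL_CHANNEL, ("DISCONNECT", device_id))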

Experimental Connection of IBM Equipment

Purpose - The main objective of this experiment was to study the connection of synchronous terminals to a packet switching network.

If the user has a large number of terminals in a given place which have to be connected to the network, he can use a concentrator. This first case is similar to that of connecting computers, and a multichannel procedure like the one defined in RCP can be used. However, if the user has only one terminal to connect, a monochannel (i.e. non-multiplexed) procedure can be used. Two such procedures were studied and tried out.

A terminal can be attached to a network via two logical units:

- A modem, which provides a standardized transmission interface (CCITT V.24 in the experiment).

- A Data Circuit Equipment (DCE), which handles the whole network access protocol.

This second logical unit can be either programmed inside the terminal itself (integrated DCE) or realized in a separate hardwired unit (separated DCE). Both integrated and separated DCE's were used during the experiment. Everything developed during this study by IBM or by the CCETT was considered experimental and did not commit either party in any way.

Configuration - The configuration used can be described as follows (See adjacent figure): An IBM 370/145 computer was connected to Lyon's RCP node via a 3705 (which was running in 2701 emulation mode) and a special front-end unit (SFU) in which the RCP multichannel procedure had been implemented; the 3705 and SFU communicated using a BSC procedure.

An IBM 3271 terminal was connected to RCP via a separated DCE which handled the monochannel access procedure. This monochannel access procedure was also developed in the SFU to simulate an integrated DCE. The 3271 and SFU were linked in this case via a BSC procedure.

All this equipment was located in the La Gaude IBM research center. On the network side, the monochannel procedure was provided by a front-end MITRA 15 connected, on its other side, to the Rennes RCP node via a 9600 b/s link using the multichannel procedure. The MITRA 15 software was written using the LCSP principles (see next section).


Monochannel procedure - The monochannel procedure makes it possible for a synchronous unit to communicate with one other unit at a time.

It provides the following functions:

- Transparent full duplex data exchange with message-structuring possibilities.

- Pacing/controlling the network's inbound and outbound flow in accordance with available resources.

- Handling silences.

- Handling communications.

Two monochannel procedures were developed:

- The first one was known as "without error check": it did not perform error checks and recoveries on the user's data, although signalling information was protected by a parity bit. This connection method has the following advantages:
. DCE simplicity.
. Negligible envelope overhead.

- The other was known as "with error check": every frame is protected by a CRC. The retransmission mechanism is the same as the one used by the multichannel RCP procedure.
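To make the "with error check" variant concrete, here is a minimal sketch (in present-day Python, for illustration only) of CRC-protected framing with a receive check that would trigger retransmission. The CRC-16/CCITT polynomial, the one-byte sequence number and the frame layout are assumptions; the paper does not give the actual RCP frame format.

    def crc16_ccitt(data, crc=0xFFFF):
        """Bitwise CRC-16/CCITT over the frame contents."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
                crc &= 0xFFFF
        return crc

    def build_frame(seq, payload):
        """Assumed layout: one sequence byte, the data bytes, then a 16-bit CRC."""
        body = bytes([seq]) + payload
        return body + crc16_ccitt(body).to_bytes(2, "big")

    def check_frame(frame):
        """Return (seq, payload) if the CRC is good, or None to ask for retransmission."""
        body, received = frame[:-2], int.from_bytes(frame[-2:], "big")
        if crc16_ccitt(body) != received:
            return None      # corrupted frame: retransmit, as in the multichannel procedure
        return body[0], bytes(body[1:])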

Data circuit equipment - The DCE is the complementary equipment between the terminal and the standardized transmission interface provided by the modem.

In separated DCE mode, a physical connecting interface, INCAS (INterface de Communication Asservie Synchrone i.e. Synchronous Paced Communication Interface), needs to be used. The objective of this interface is to provide an easy method of terminal connection with minor (or, if possible, no) modifications inside the terminal.

The INCAS interface is compatible with the CCITT V Series recommendations, except for circuits 114 and 115, which run in "isochronous burst" mode for pacing between the terminal and the DCE. This allows a transmission speed between the DCE and the terminal higher than the synchronous link speed. {CCITT V.24 circuits 114 and 115 are the transmit clock (pin 15) and receive clock (pin 17) respectively. Changing the frequency of these signals changes the rate at which data is transferred. I suspect fixed speeds were used which were higher than normal modem data rates. /RDM}

The following controls and indicators are provided on the DCE for the man-machine dialogue:
- Call button (for hot-line calls).
- Terminal ready light.
- Communication established light.
- Incoming call light.
The DCE was also provided in the experiment with a keyboard which allowed dialling.

Some conclusions - BSC messages were exchanged between the 3271 and the 370 via the RCP network. Compared with a direct connection, it was observed that response times were increased, especially when using the "polling/addressing" working mode, which is therefore not very well adapted to packet-switched transmission.

The realization in the MITRA 15 of the RCP multichannel and monochannel procedures gives a good idea of the cost and complexity of such software attachments:

- 4 K bytes for the RCP multichannel procedure.
- 2.7 K bytes for the "with error check" monochannel procedure.
- 2.4 K bytes for the "without error check" monochannel procedure.

It was also checked that the separated DCE attachment was possible without modifications to the 3271 terminal; however it is not possible to say whether it would be the same with another terminal.

The Logical Channels Switching Program

In the RCP network, each node supports both synchronous and asynchronous accesses from customers, but the limited memory available on the PDP 11 does not permit the implementation of more than one synchronous multichannel link control procedure. It is obvious, however, that the range of experiments would be enlarged by the possibility of offering various procedures for connections with host computers, synchronous terminals or other networks.

To solve that problem, and to avoid disturbing the continuous operation of RCP, it was decided at the beginning of 1975 to develop, outside of the nodes, software simulating a front-end processor for the network. This software was specially designed to allow the quick implementation of any kind of link control procedure, as long as it is based on the virtual circuit service. The kernel of this software is the logical channels switching program (LCSP), which acts as a mini access method to a packet-switching network for every other piece of software. Thus, any program (either a network access procedure or application software) is always seen as a logical link, which may consist of one or several logical channels.

The two functions of this Monitor program are:

- To set up or clear a communication between two logical channels.

- To transmit data between two logical channels when such a communication is set up.

The data flows both ways through FIFO queues of internal packets. The length of these queues is limited to permit data flow control (the maximum length of each queue is computed when the communication is set up, depending on the packet length used on each logical channel and on the data transfer rates requested).

A message priority system allows a high-priority packet to destroy previously entered ones in the queue. Another mechanism allows some information to by-pass the queues in order to implement facilities such as data interrupts on logical channels, different levels of data, etc...
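The following sketch (in present-day Python, for illustration only) shows these mechanisms under stated assumptions: a communication is a pair of bounded FIFO queues between two logical channels, the queue-length rule (a fixed byte budget divided by the packet length of the receiving side) is invented for illustration, and a high-priority packet purges the queue as described above.

    from collections import deque

    class Communication:
        """A pair of bounded FIFO queues linking two logical channels (sketch only)."""

        def __init__(self, pkt_len_a, pkt_len_b, budget_bytes=4096):
            # Assumed sizing rule: a fixed byte budget per direction, divided by
            # the packet length used on the receiving logical channel.
            self.max_len = {"a->b": max(1, budget_bytes // pkt_len_b),
                            "b->a": max(1, budget_bytes // pkt_len_a)}
            self.queues = {"a->b": deque(), "b->a": deque()}

        def send(self, direction, packet, priority=False):
            q = self.queues[direction]
            if priority:
                q.clear()                    # a high-priority packet destroys queued ones
                q.append(packet)
                return True
            if len(q) >= self.max_len[direction]:
                return False                 # flow control: the sender must wait
            q.append(packet)
            return True

        def receive(self, direction):
            q = self.queues[direction]
            return q.popleft() if q else None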

Around the LCSP are now running the RCP multichannel and single-channel procedures, the draft X.25 procedure developed in common with TCTS, Telenet and the U.K. Post Office, and several application programs. The LCSP was used for the EDF, IBM and FLORE connections (see above). In the Modular One connection, the monitor is based on similar principles.

THE CONTROL CENTER

The minicomputer T 1600 supporting the Control Center, delivered in July 1973, was the first computer to be connected to the network. It has been operating on the network since mid-75. It is only linked to a single node (the Rennes node in standard operation) but, if this node should cease to function, it can be shifted to another node in a few seconds. The equipment of the minicomputer consists of 24 K words of core store (16-bit words), a 10^6-byte disc, a medium-speed line printer, two typewriters, and a line interface for 9600 b/s synchronous operation.

{editor's note: R. Després gives a disc size of 10 megabytes rather than the 10^6 bytes given here. The 10-megabyte figure is slightly more plausible, but the T1600 was sold with a disc as small as 192 KB. /RDM}

The three most important functions of the control center are:

- Overseeing the correct operation of the network.

- Gathering data on the activity of lines, nodes and customers.

- Offering services by means of a virtual host process, answering calls from the customers through the network.

Each node of RCP gathers data concerning its own operation and the lines connected to it.

In the control computer a process dedicated to each node is permanently in contact with it on a virtual circuit that is automatically kept in use while the node is alive. The data collected in this way are used to update central tables depicting the current state of all the devices and customers of the network.

For each node the load in computing power as well as in allocated memory is recorded. For the lines between nodes or from nodes to customers the traffic and states are recorded and the error rate is computed in real time.

These central records are then used to make up a log on a printout device chosen by the operator, or to display warnings by means of bells or special patterns of lights on the operating desk that can sum up at a glance the state of the most important parts of the network. These central data are also used to provide information to customers.

Many services have been developed that can be "dialled up" by customers of the network. Some provide help for the debugging of newly connected computers when implementing virtual circuit facilities. Others display real-time information about the state of the network, and in particular a map of the network indicating the operation of its elements. Still others, controlling data bases, display information such as a network operation manual, a list of subscribers, the method of access to computers supplying services on the network, etc...

At regular intervals, statistics gathered by the control computer are stored on a disc file after some processing. The frequency of these "dumps" is automatically computed according to the size of the file and the number of hours devoted to this activity, these two parameters being fixed by the operator. When the end of the period is reached, the file is transferred to the CCETT computing center (IRIS 80) while a new file is opened. In normal operation, the storage available is sufficient to store a week-long statistical record. It must be noted that the file transfer is done through RCP's virtual circuit mechanism. Summaries of the statistics are automatically printed out at hours prescribed by the operator.
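One possible reading of this scheduling rule, given here as a hypothetical sketch only, is that the dump interval simply spreads the capacity of the file evenly over the period fixed by the operator; the function name and the linear rule are assumptions, not the actual algorithm of the control computer.

    def dump_interval_minutes(file_capacity_records, period_hours):
        """Spread the available records evenly over the statistics period."""
        return (period_hours * 60.0) / file_capacity_records

    # Example: a file holding 336 records covering one week (168 hours)
    # gives one dump every 30 minutes.
    print(dump_interval_minutes(336, 168))   # 30.0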

In conclusion, this control center has been designed to operate with a large degree of autonomy. Nevertheless, an operator can take control at any time to modify parameters, alter the configuration of the computer, request special processing of statistics or input a message that will be delivered to all customers when they are next connected to the network.

CONCLUSION

Three phases can be distinguished in the RCP project:

The elaboration phase which came to an end at the beginning of 1975, the operation phase which started at the beginning of 1976 and is expected to last until the beginning of 1979, and between them an intermediate one.

The first phase was useful in three ways:

- Firstly in validating a certain number of technical choices, in order to provide favorable conditions for the team which was to draw up the specifications of TRANSPAC, and to give it a firm base to work on;

- Secondly in providing experience with the first synchronous connections so that eventually genuine help could be offered to customers wanting to be connected to the system;

- Finally in allowing the development of the Control Center so that an operational tool was available almost at the start of the second phase.

During the intermediate phase, that is to say the year 1975, the RCP team was engaged in two activities:

- Studying and putting into service certain synchronous connections, as we have seen above,

- Operating the network fully, in all its aspects, so as to acquire solid experience of the subject.

The third phase will comprise several fields of action:

- Firstly, the network and its entire operation are to be transferred to a service of the PTT dedicated to that type of function. As can easily be imagined, the problem is not simple.

- Connection of host computers will continue to be studied and set up, and it should be noted that the new requests that have been received recently will extend the field of hardware connected, since they are for ICL and CDC computers.

- In parallel with the connection of computers, that of "intelligent" terminals will also be studied.

- Lastly, and still with a view to adapting to TRANSPAC, studies have been started of the services which should be offered by such a network, for example interprocess communication.

We will end by noting that the number of synchronous entry points to RCP has become insufficient compared with the number of requests, and that the network is to be extended to four nodes. On another front, and also at the request of users, the possibility of gaining access to the network using the TRANSPAC protocol will be offered.

TABLE 1

SYNCHRONOUS ACCESS TO THE NETWORK

synchronous access (DP11, DQS11)

               synchronous    internode    9600 b/s access for customers
               line           links        existing        in use
               interfaces
MARCH 1974         13             6            7              1
MARCH 1975         27            12           15              4
MARCH 1976         27            12           15             11

ASYNCHRONOUS ACCESS TO THE NETWORK

asynchronous access (DH11, TDM SAT)

               existing   in use:
                          total    autoanswering                       leased    local
                                   telephone             telex         line      line
                                   (110, 300, 1200 b/s)  (50 b/s)
MARCH 1974        24        10         4                    4             0         2
MARCH 1975       128        84        45                   15             3        21
MARCH 1976       156       101        45                   15             6        35

DISTRIBUTION OF ACCESS IN MARCH 1976

ON THE PDP 11's

Synchronous line interfaces (SLI): 27
                                              P    L    R
Program interrupt SLI : DP11                  5    4    4
Non processor request dual SLI : DQS11        2    2    3

Asynchronous line interfaces (ALI): 80
                                              P    L    R
Multiplexor for 16 lines : DH11               2    1    2

P = Paris L = Lyon R = Rennes

ON THE SAT 3729's: 76

Rennes and Paris: each TDM is equipped for 20 lines.

Marseille, Bordeaux and Lille: each TDM is equipped for 12 lines.

Figures 2a), 2b), 2c): Network Configurations

APPENDIX

THE BASIC PRINCIPLES OF RCP

The synchronous procedure is characterized by the separation of the frame level and the packet level. A frame may contain several packets and the recovery of transmission errors is performed at the frame level. The packets contained in an acknowledged frame can no longer be rejected. This implies preventive flow control. There are three types of packets:

- Ready to receive (PR)

- Data (D)

- Service (I = initialization; M = end of message; P = ready for message).

A synchronous line is divided into logical channels, and a virtual circuit is a concatenation of such logical channels. Resource allocation (routing and buffer storage) is done when the communication is set up. This choice simplifies buffer management and is advantageous for small networks.

Another important specification is that of variable length packets. Data are transmitted in the form of a sequence of bytes separated by the "end of message" markers. It is neither necessary nor guaranteed that packets should be the same length on arrival as at their departure. Only the order of the bytes and the positions of the "end of message" markers are preserved. The flow control is done at the byte level. In addition, the synchronous procedure is characterized by a command level.
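This repacketization rule can be made concrete with the following sketch (in present-day Python, for illustration only), which re-cuts a sequence of packets while preserving only the byte order and the positions of the "end of message" markers; representing a packet as a (data, end-of-message) pair is an assumption made for illustration.

    def repacketize(packets, max_len):
        """Re-cut (data, eom) packets into packets of at most max_len bytes."""
        out, buffer = [], b""
        for data, eom in packets:
            buffer += data
            # Flush full packets; an intermediate boundary carries no end-of-message marker.
            while len(buffer) > max_len:
                out.append((buffer[:max_len], False))
                buffer = buffer[max_len:]
            if eom:                          # the marker must stay after the same byte
                out.append((buffer, True))
                buffer = b""
        if buffer:
            out.append((buffer, False))
        return out

    # Example: two small packets ending a message may arrive merged into one.
    print(repacketize([(b"HEL", False), (b"LO", True)], max_len=128))
    # -> [(b'HELLO', True)]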

Logical channel 0 carries the following commands:

A for the call.

C for the confirmation of the connection.

L for the clearing.

F for the end of transmission.

E to reply to unrecognizable commands.

The logical channels are divided into incoming and outgoing channels, to avoid the possibility of collision of calls. This has an effect on the automata which make and break the virtual circuits. While confirmation of the call is sent from the callee to the caller, clearing is done logical channel by logical channel along the virtual circuit, care being taken to distinguish between an outgoing channel, which needs confirmation, and an incoming channel, which is immediately freed.
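A simplified sketch of the call set-up and clearing automaton implied by these commands is given below (in present-day Python, for illustration only); the states and the exact clearing-confirmation rule are assumptions, and only the command letters (A, C, L, E) and the incoming/outgoing distinction come from the text above.

    class LogicalChannel:
        def __init__(self, outgoing):
            self.outgoing = outgoing         # outgoing channels place calls, incoming ones receive them
            self.state = "FREE"

        def place_call(self):
            """Caller side: only an outgoing channel may place a call."""
            assert self.outgoing and self.state == "FREE"
            self.state = "CALLING"
            return "A"                       # call command sent on logical channel 0

        def on_command(self, cmd):
            if cmd == "A" and not self.outgoing and self.state == "FREE":
                self.state = "CONNECTED"
                return "C"                   # the callee confirms the connection
            if cmd == "C" and self.state == "CALLING":
                self.state = "CONNECTED"
                return None
            if cmd == "L":                   # clearing
                # An incoming channel is freed at once; an outgoing channel
                # waits for confirmation before it can be reused.
                self.state = "FREE" if not self.outgoing else "CLEARING"
                return None
            return "E"                       # reply to an unrecognizable command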

REFERENCES

[1] R. Després, RCP the experimental packet-switched data transmission service of the French PTT, ICCC 1974, Stockholm.

A. Bache: received the degree of Engineer in electro-technology from the Ecole Nationale Supérieure d'Informatique, d'Electronique, d'Electrotechnique et d'Hydraulique de Toulouse, France, in 1961. From 1963 to 1964, he worked on reliability measures for an electronic component firm; in 1965, he joined the Centre National d'Etudes des Télécommunications where he gained experience in computer software engineering (mainly on compilers, operating systems and real-time systems). In 1971, he was the software project manager for the first French experiment on packet-switching networks (RCP). In 1973, he came to the CCETT, Rennes, to develop a new release of the RCP software and to participate in the specifications of TRANSPAC and other related teleprocessing problems.


L.C. Guillou: was born at Henvic, France, on July 12, 1947. He graduated from the Ecole des Arts et Manufactures Paris, in 1970, and received the Docteur Ingénieur degree in electronics for telecommunications (wave propagation and antenna) from the University of Rennes, in 1973. Since 1973, he has been employed by Télédiffusion de France in the Centre Commun d'Etudes de Télévision et de Télécommunications in Rennes. He played a prominent part in the installation of the RCP network. His major interest is in packet-switched data transmission.

H. Layec: received the degree of Engineer in Data Processing from the Ecole Supérieure d'Electricité, University of Paris, in 1972. From 1972 to 1974, he was a teaching assistant at the Instituto Universitario de Tecnología, Caracas, Venezuela. He joined the CCETT in 1974, since when he has been engaged in software development on packet-switching networks.

B. Lorig: received the degree of Engineer in telecommunications from the Ecole Nationale Supérieure des Télécommunications, Paris, France, in 1970, and the degree of Doctorat-ès-Sciences from the University of Rennes, France, in 1973. From 1970 to 1974, he was employed by the Antenna Laboratory, University of Rennes, where he worked on numerical analysis for electromagnetics and antennas. He joined the CCETT in 1974, and turned his activity toward packet-switching networks.

Y.A. Matras: began his professional life in 1966 at the Computing Center of the French Atomic Energy Commission. He taught computer science for two years at the Faculté des Sciences d'Orsay, near Paris, and spent three years as a visiting assistant professor of mathematics at two American universities. He was hired in 1973 by the French PTT and appointed Director of the Computing Center of the Centre Commun d'Etudes de Télévision et Télécommunications in Rennes, France. In early 1975, he became head of the Département RSI (Réseaux de Systèmes Informatiques) in the same organization. Mr Matras holds a PhD (Doctorat ès Sciences) in mathematics from the University of Paris.