More details on the ALICE Data Analysis

The goal of the Offline project is to develop and maintain the software capable of storing, reconstructing and analyzing the billions of bits of data generated by the ALICE detector every second.

The Offline project is responsible for the design and operation of a software framework called AliRoot, which employs modern pattern-recognition techniques to assemble physics events from the detector data produced by p-p and heavy-ion collisions. AliRoot is also used daily by the thousand or so ALICE physicists to analyze the reconstructed data and prepare physics publications. The framework
consists of approximately 5 million lines of code, written mostly in C++. 

The Offline project also develops and operates a sophisticated distributed processing system, called AliEn, used to store and analyze the experiment's data. It manages about 50 petabytes of disk and tape storage and 80 thousand CPUs, distributed over some 90 computing centres worldwide, large and small, which form the backbone of the offline machinery for all LHC experiments: the Worldwide LHC Computing Grid.

To develop and support the software, hundreds of young physicists and software engineers from the institutes and universities participating in ALICE join forces, each taking responsibility for a part of the framework. The Offline project's role is to ensure that the pieces fit together perfectly and that the software is ready for use by all.

The ALICE software is a 'living organism', constantly adapting to the varying conditions in the LHC accelerator and the detectors, as well as incorporating new tracking and analysis techniques. It is put to a rigorous test every day to ensure that it is free of bugs and that the physics results are correct.


The ALICE offline calibration and alignment framework will provide the infrastructure for storing and accessing the experiment's condition data. These include most non-event data, notably the calibration and alignment information used during reconstruction and analysis. Its design is primarily driven by the need for seamless access to a coherent set of calibration and alignment information in a model where data and processing are distributed worldwide over many independent computing centres.
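The access model just described can be sketched in a few lines of C++. The names below are invented for illustration and are not the AliRoot API: the essential idea is that each condition object carries a run-number validity interval, so that reconstruction anywhere on the Grid can always retrieve the calibration valid for a given run.

```cpp
#include <map>
#include <string>
#include <stdexcept>

// Toy sketch (not the AliRoot API): a condition object is stored together
// with the run range for which it is valid.
struct CondObject {
    int firstRun;        // first run for which this object is valid
    int lastRun;         // last run (inclusive)
    std::string payload; // e.g. serialized calibration constants
};

class CondStore {
    // Keyed by a hierarchical path, e.g. "TPC/Calib/Pedestals";
    // several versions with different validity ranges may coexist.
    std::multimap<std::string, CondObject> store_;
public:
    void put(const std::string& path, const CondObject& obj) {
        store_.insert({path, obj});
    }
    // Return the payload valid for the given run, or throw if none exists.
    const std::string& get(const std::string& path, int run) const {
        auto range = store_.equal_range(path);
        for (auto it = range.first; it != range.second; ++it)
            if (run >= it->second.firstRun && run <= it->second.lastRun)
                return it->second.payload;
        throw std::runtime_error("no condition object valid for this run");
    }
};
```

A reconstruction job would then ask, for example, for `get("TPC/Calib/Pedestals", runNumber)` and always obtain a coherent answer, regardless of which computing centre it runs at.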


The Off-line Project is steered by an Off-line Computing Board. Every project (detector) formally commits to naming one or two full-time Off-line co-ordinators, depending on the size of the project. These are intended to be medium-term nominations (of the order of three years), with a substantial overlap between each co-ordinator and his or her successor.

The Computing Board will also include one representative each from the Physics Board and the Off-line Project, representatives of the Regional Centres once these are established, and the lead developers of the major software packages.

ALICE has recently joined the Monarc project, and its computing-centre policy is being defined. Representatives from the major computing centres will be added as soon as the interested centres are identified.

The main tasks of the Board will be to:

  • Plan the development of the Off-line software
  • Ensure communication with the detector projects
  • Define and review milestones
  • Define and monitor the progress of work packages

While the CERN Off-line group developed the global infrastructure, the representatives of the different projects concentrated on their own project's software, making sure that it all fits into a coherent whole. In particular, the tasks of the representatives have been to:

  1. Physically manage the insertion and update of the code into the central repository
  2. Participate in the planning of the global Off-line Project. In particular participate in defining goals and milestones that are realistic and relevant to the collaboration
  3. Find and coordinate the necessary resources within their project to meet the agreed milestones
  4. Interface Off-line activities with the other activities in the project, such as test beams, DAQ and detector design
  5. Provide front line support for Off-line users in the project, filtering trivial problems and reporting the others to the Off-line project
  6. Maintain and develop the project-specific Off-line code; in particular, provide adequate documentation for the code and a test suite.

The ROOT framework provides a set of interacting classes and an environment for the development of software packages for event generation, detector simulation, event reconstruction, data acquisition, and a complete data analysis framework including all PAW features. An essential component of ROOT is the I/O subsystem that allows one to store and retrieve C++ objects and is optimized for efficient access to very large quantities of data. ROOT has evolved over the years to become a mature and complete framework, embodying a very large spectrum of features. It is outside the scope of this document to provide a full description of ROOT. In the following paragraphs we shall limit ourselves to outlining the major features that are relevant for the ALICE computing framework.

ROOT is written in C++ and offers integrated I/O with class schema evolution, an efficient hierarchical object store with a complete set of object containers, a C++ interpreter that allows C++ to be used as a scripting language, advanced statistical analysis tools (multidimensional histogramming, commonly used mathematical functions, random-number generators, multi-parametric fitting, minimization procedures, cluster-finding algorithms, etc.), hypertext documentation tools, geometrical modelling, and advanced visualization tools. The user interacts with ROOT via a graphical user interface, the command line, or C++ script files, which can be either interpreted or dynamically compiled and linked.

Moreover, ROOT presents a coherent and well-integrated set of classes that inter-operate easily via an object bus provided by the interpreter dictionaries (these provide extensive Run-Time Type Information, RTTI, for each object active in the system). This makes ROOT an ideal basic infrastructure on which an experiment's complete data handling chain can be built: from the DAQ, using the client/server and shared-memory classes, through the database and distributed analysis, thanks to the PROOF facility, to data presentation.
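The PROOF idea of splitting a sample across workers, filling partial results locally and merging them at the end can be sketched with plain C++ threads. This is a toy illustration of the map/merge pattern, not the PROOF API; all names are invented.

```cpp
#include <thread>
#include <vector>
#include <array>
#include <cstddef>

// Toy sketch of the PROOF pattern: each worker fills a local histogram
// over its own slice of the event sample; the partial histograms are
// then merged into the final result.
std::array<long, 10> parallelHistogram(const std::vector<double>& events,
                                       unsigned nWorkers) {
    // One private histogram per worker, so no locking is needed.
    std::vector<std::array<long, 10>> partial(nWorkers, std::array<long, 10>{});
    std::vector<std::thread> workers;
    for (unsigned w = 0; w < nWorkers; ++w) {
        workers.emplace_back([&, w] {
            // Each worker processes an interleaved slice of the events.
            for (std::size_t i = w; i < events.size(); i += nWorkers) {
                int bin = static_cast<int>(events[i]); // unit-wide bins over [0, 10)
                if (bin >= 0 && bin < 10) ++partial[w][bin];
            }
        });
    }
    for (auto& t : workers) t.join();
    // Merge step: sum the partial histograms.
    std::array<long, 10> merged{};
    for (const auto& p : partial)
        for (int b = 0; b < 10; ++b) merged[b] += p[b];
    return merged;
}
```

PROOF applies exactly this pattern, but across processes on many Grid nodes rather than threads in one program, with ROOT's merging machinery combining the partial histograms.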

Finally, the ROOT system has been successfully interfaced with emerging Grid middleware. The first instantiation of this interface was with the ALICE-developed AliEn system. In conjunction with the PROOF system, this has allowed the realization and demonstration of a distributed parallel computing system for large-scale production and analysis. The interface is currently being extended to other middleware systems as they become available.

GEANT is the name of a family of simulation software packages designed to describe the passage of elementary particles through matter using Monte Carlo methods. The name stands for "GEometry ANd Tracking", and the very first version of GEANT dates back to 1974.

Geant3 is a software package for High Energy Physics (HEP) studies that allows large numbers of particles, and their interactions with matter, to be simulated. It has been developed by an international collaboration in which CERN is a leading member. It can simulate all the particle types important for the study of heavy-ion collisions: gammas, electrons, positrons, protons, neutrons and ions. It also includes models for all the important physics interactions: electromagnetic, hadronic, photolepton-hadron and optical.
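The Monte Carlo method at the heart of GEANT-style transport can be illustrated with a toy example (this is not Geant3 code): the distance a particle travels before interacting is sampled from an exponential distribution with mean free path λ, so the fraction of particles crossing a slab of thickness L without interacting approaches exp(-L/λ).

```cpp
#include <random>

// Toy Monte Carlo (not Geant3 code): fire nParticles at a slab of material
// of the given thickness; for each particle, sample an exponentially
// distributed free path with mean `lambda` and count it as surviving if
// the sampled path exceeds the slab thickness.
double survivalFraction(double thickness, double lambda,
                        int nParticles, unsigned seed = 12345) {
    std::mt19937 gen(seed);
    // Rate parameter 1/lambda gives a mean free path of lambda.
    std::exponential_distribution<double> freePath(1.0 / lambda);
    int survived = 0;
    for (int i = 0; i < nParticles; ++i)
        if (freePath(gen) > thickness) ++survived; // no interaction inside slab
    return static_cast<double>(survived) / nParticles;
}
```

Real transport codes repeat this sampling step by step through a detailed detector geometry, choosing among all the competing interaction processes at each step; the exponential free-path sampling shown here is the elementary building block.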

One of Geant4’s most prominent features is its ability to handle complex geometries, as it was designed from the outset to cope with the incredibly complex geometries of the detectors of the LHC experiments. Geant4 offers the most flexible geometry modeller of any particle and radiation transport tool, and this important feature is fully exploited in ALICE.

Another important feature of Geant4 is that it is the first particle transport tool coded in C++, a modern object-oriented programming language. This has allowed Geant4 to be extended easily with additional geometry shapes and specialised physics models from many different authors, without incurring an overhead in its computational performance.