Pi/Mu Separation

From LHEP Wiki
Revision as of 12:41, 19 August 2008 by Lhep (talk | contribs) (→‎MC related stuff)

PSI/CERN exposure in July 2007: Pi/Mu testbeam

General Information

- CERN exposure:

- Purpose: to provide intercalibration tracks in the corners of the brick
- 10 GeV Pi-
- +50 mrad, +150 mrad
- about 4x3000 particles triggered (2 spots x 2 angles)

- PSI exposure:

- Purpose: Provide unbiased emulsion data for Pi/Mu separation analysis
- Mu- and Pi+ with angles of 0...-300 mrad (about 5000 particles)
- p_mu = 125 MeV/c at muE1 area at PSI
- p_pi = 148 MeV/c at piM1 area at PSI

- Brick setup: 3e + 35 x (Pb + e), i.e. 3 emulsion films followed by 35 repetitions of (lead plate + emulsion film)

Bricks

Brick  Particle  Angle (mrad)  Location  Comment
b1     Muon      -300          Lyon
b2     Muon      -100          Bern      Scanning on Mic2 now
b3     Muon      0             Bern      Scanning is finished
b4     Muon      -200          Bern
b5     Pion      0             Lyon
b6     Pion      -300          Lyon
b7     Pion      0             Bern      Scanning is finished
b8     Pion      -100          Bern      next, after b2
b9     Pion      -200          Bern      no marks

Produce Predictions

This is not a full description; it is meant as an overview and guideline.

- all the scripts can be found at /terabig/scan/Jonas/PiMu

- use scanlarge_test.C in the ONLINE folder to check the quality and settings before scanning
- use scanlargePiMu.C to scan the full emulsion surface (10.5 cm x 8 cm = 84 cm^2). It produces 12 subareas.
- produce the old FEDRA format structure (data, par, report) for each subarea.
- align all the subareas (use default.par_forAlignment): use scanlib.C, AlignPlate1(), 3x recset -a -lnk.def
- track the subareas (use default.par_forTracking)
- merge the linked_tracks.root files
- open the linked_tracks.root file, define a good cut and use link2cp.C to produce a cp file: link2cp("mycut", 3). The argument 3 is necessary to project to the correct Z distance of 0; 1 projects to a Z distance of -600.
- use cp2txt.C to produce the predictions file (pred.txt)
- make a folder pred in the main folder of the brick and put pred.txt into it.
- use split.sh to split pred.txt into several files (adapt according to the number of predictions you have)
- use dopred() in the script utils.C (adapt according to the number of predictions you have)
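The projection step performed by link2cp.C can be illustrated with a minimal sketch. This is not the FEDRA code itself: the `Track` struct and `projectToZ` function are hypothetical, and a simple straight-line extrapolation of each track is assumed.

```cpp
struct Track {
    double x, y;    // transverse position
    double tx, ty;  // slopes dx/dz, dy/dz
    double z;       // Z at which the track was measured
};

// Linearly extrapolate a track to a target Z plane (Z = 0 for the cp file).
Track projectToZ(const Track& t, double zTarget) {
    double dz = zTarget - t.z;
    return Track{ t.x + t.tx * dz, t.y + t.ty * dz, t.tx, t.ty, zTarget };
}
```

With this picture, the third argument of link2cp() simply selects which Z plane the measured tracks are extrapolated to before being written out as predictions.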

Scanback

- The scanback script is adapted to process more than 100 predictions; a large number of predictions slowed down the scanning too much.

- IMPORTANT: do the intercalibration with scanback() and the scanback itself with scanback_batch() in sb5.C. The calibration cannot be done with scanback_batch(), because it would overwrite the AFF transformation file that has to be copied to all the prediction files.

- Merge SBT: use the function mergesbt(&lt;version&gt;) in the sb5.C script. First adapt the .operation.h file accordingly; for example, to merge plates 4-7, set START_PLATE=4 and TO_PLATE=7.
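The plate bookkeeping of the merge step can be sketched as follows, assuming mergesbt() simply concatenates the per-plate SBT data from START_PLATE to TO_PLATE in order. The helper names here are hypothetical, not part of sb5.C:

```cpp
#include <string>
#include <vector>

// Build the list of plates the merge would process for the given range,
// e.g. START_PLATE=4, TO_PLATE=7 -> plates 4,5,6,7.
std::vector<int> platesToMerge(int startPlate, int toPlate) {
    std::vector<int> plates;
    for (int p = startPlate; p <= toPlate; ++p)
        plates.push_back(p);
    return plates;
}

// Concatenate the per-plate SBT contents in plate order,
// ensuring every block ends with a newline.
std::string mergeSbt(const std::vector<std::string>& perPlate) {
    std::string merged;
    for (const std::string& s : perPlate) {
        merged += s;
        if (!merged.empty() && merged.back() != '\n')
            merged += '\n';
    }
    return merged;
}
```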

Analysis

- Adapt the line in WriteStopping of sb5.C: in "if(rtsb.ePred.ID()==p->ID() && rtsb.eStatus==0 && rtsb.eIdp[1]>p->PID())", use > instead of < (to start from plate 1 instead of plate 57)

- The stopping points are written to txt files. Merge them manually and add the following first line so that the root plots are created automatically:

N/I:X/F:Y/F:TX/F:TY/F:S/I\n
0 19988.351562 11970.712891 0.065430 -0.061815 36
1 22626.093750 13754.257812 -0.249425 0.065311 36
- TTree PV;
- PV.ReadFile("b000007.stopping_points_Tree_all.txt");
- PV.Draw("S","","");
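The manual merge step above can be sketched with a small helper (hypothetical, not part of sb5.C) that prepends the TTree::ReadFile branch descriptor and concatenates the stopping-point files:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Merge stopping-point text blocks and prepend the branch descriptor line
// that TTree::ReadFile needs to build the tree automatically.
std::string mergeStoppingPoints(const std::vector<std::string>& fileContents) {
    std::ostringstream out;
    out << "N/I:X/F:Y/F:TX/F:TY/F:S/I\n";
    for (const std::string& c : fileContents) {
        out << c;
        if (!c.empty() && c.back() != '\n')
            out << '\n';  // make sure each block ends with a newline
    }
    return out.str();
}
```

The resulting text file can then be read directly with PV.ReadFile(...) as shown above, since the first line tells ROOT the branch names and types.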

- Chain sbt-files: use ChainSBT.C script - adapt accordingly

MC related stuff

Figure: PDG ID list (File:Pdgidlist.png)
  • Check the actual number of low-energy particles in the OPERA bricks
  • read the file:
TFile *MCfile = new TFile("filename");
TTree *TreeKB = (TTree*)MCfile->Get("TreeKB");