Andreas


Talks

Walk-through presentation (MiniBat, 16/11/2009) [1]

Bern Plans in 1 lepton channel (Susy ETmiss meeting, 14/01/2010) [2]

Analysis status (Bat, 25/2/2010) [3]

JE trigger studies (Susy Trigger Meeting, 23/3/2010) [4]

JE trigger studies (Jet Trigger Signature Group Meeting, 03/05/2010) [5]


Comments from Michele on the walk-through, Nov 09


Where are you in the plans on page 24 ?


Are you working in release 15 ?


Where in SVN is your analysis code ?


A first step is to get the signal and BG samples (list on p18 ?). Repeat the steps from page 7 yourself.


For all the BG samples, make a table with cross sections, number of events before selection, after selection, ...
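A minimal sketch of such a table in Python (all cross sections, event counts and the luminosity below are placeholders, not real numbers): the expected yield per sample is sigma x L x (N after / N before).

 # Bookkeeping table for the BG samples: expected yield = sigma * L * efficiency.
 samples = {
     # name: (cross section [pb], N generated, N after selection)  -- placeholders
     "ttbar":  (100.0, 200000, 1500),
     "W+jets": (1000.0, 500000, 800),
     "QCD":    (10000.0, 1000000, 300),
 }
 lumi_pb = 100.0  # assumed integrated luminosity in pb^-1 (placeholder)

 print(f"{'sample':10s} {'sigma[pb]':>10s} {'N_gen':>9s} {'N_sel':>7s} {'eff':>8s} {'N_exp':>9s}")
 for name, (sigma, n_gen, n_sel) in samples.items():
     eff = n_sel / n_gen
     n_exp = sigma * lumi_pb * eff
     print(f"{name:10s} {sigma:10.1f} {n_gen:9d} {n_sel:7d} {eff:8.4f} {n_exp:9.1f}")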


I agree to go for the 1-lepton selection as a first go. Think about doing e or mu, or both ? (see also your slide 13). This depends on who else will do a benchmark-based exclusion/search analysis to be completed by next summer. Please find out. There is also the possibility to add b-tagging requirements. Is that an option ?


About the limit statistics: make a proposal for what you will use. CLs ? But do not spend too much time on this at first; you can redo it later with a more sophisticated method once you have all the numbers. I would go with a frequentist approach first (heavy-heartedly...)
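A minimal sketch of a simple frequentist counting-experiment limit in Python (not any official machinery; the observed count and background below are placeholders): scan the signal yield and find the 95% CL upper limit from the Poisson probability of observing at most n_obs events, optionally normalised to CLs.

 # Frequentist upper limit for a single counting experiment.
 from scipy.stats import poisson
 import numpy as np

 def upper_limit(n_obs, b, cl=0.95, use_cls=False):
     clb = poisson.cdf(n_obs, b)                    # P(N <= n_obs | b)
     for s in np.arange(0.0, 100.0, 0.01):          # scan the signal yield
         clsb = poisson.cdf(n_obs, s + b)           # P(N <= n_obs | s + b)
         p = clsb / clb if use_cls else clsb        # CLs or plain CLs+b
         if p < 1.0 - cl:
             return s
     return None

 # Example: 3 observed events on an expected background of 2.5 (placeholders)
 print(upper_limit(3, 2.5))                # CLs+b limit
 print(upper_limit(3, 2.5, use_cls=True))  # CLs limit (more conservative)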


Talk to Tobias about limits in SU4. Do you get the same estimates ?


Event selection. Will you start with the CSC 14 TeV default ? Or do you have a better proposal ?


It may be a good idea to run the analysis on cosmic data ? We do not expect to find anything, but it would be a nice exercise. Or on the top-mix samples ? Or on the first runs with 900 GeV collisions ? Getting a _real_ dataset for the analysis will take until late spring at least.


Trigger strategy for first data in the SUSY Group. Find out from Teresa what the strategy is. What triggers to use ? Are the jet triggers filtering at HLT ?


Since the trigger is a significant part of your thesis: propose a trigger to be used. The "full thing" ! Trigger, L1, L2, HLT (?), rates (minbias), efficiencies (signal) estimated from turn-on (Tobias) and simulation, overlaps, comparison to other triggers. Validation in first data. I guess this could be a 3J_XXX. Focus on one first. Once you have an idea, keep Teresa and George informed. Talk to Tobias about code and possible studies of turn-ons on QCD samples. For the selection efficiencies on signal, remember that they are with respect to the offline selection.
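A minimal sketch of a turn-on measurement in Python (the inputs are randomly generated placeholders, not ntuple variables): trigger efficiency in bins of offline leading-jet pT with simple binomial errors.

 # Trigger turn-on: efficiency vs offline leading-jet pT.
 import numpy as np

 # Placeholder per-event arrays, e.g. read from an ntuple.
 lead_jet_pt = np.random.uniform(0, 200, 10000)                             # GeV
 trigger_fired = lead_jet_pt + np.random.normal(0, 15, lead_jet_pt.size) > 60.0

 bin_edges = list(range(0, 210, 10))
 for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
     in_bin = (lead_jet_pt >= lo) & (lead_jet_pt < hi)
     n_all = int(in_bin.sum())
     n_pass = int((in_bin & trigger_fired).sum())
     if n_all == 0:
         continue
     eff = n_pass / n_all
     err = (eff * (1.0 - eff) / n_all) ** 0.5     # simple binomial error
     print(f"{lo:3d}-{hi:3d} GeV  eff = {eff:.3f} +- {err:.3f}")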


How do you plan to apply the triggers in MC ? Throw away some events ? Weights ?
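A minimal sketch of the two options in Python (all arrays are placeholders): reject events failing the simulated trigger bit, or weight every event by a trigger efficiency taken e.g. from the turn-on.

 # Two ways of applying a trigger in MC.
 import numpy as np

 passed_trigger = np.array([True, False, True, True])   # simulated trigger decision
 event_weight = np.ones(4)                               # generator weights
 trig_eff = np.array([0.95, 0.40, 0.98, 0.99])           # efficiency from the turn-on

 # (a) throw away events failing the trigger
 yield_reject = event_weight[passed_trigger].sum()

 # (b) keep all events and weight by the trigger efficiency
 yield_weight = (event_weight * trig_eff).sum()

 print(yield_reject, yield_weight)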


A study of the HLT in pass-through (PT) mode makes sense for a trigger study, but it is not needed for the physics analysis. L1 alone, if possible for the rate and efficiency, is easiest. Likely possible for L1_3J10 or L1_3J20.


I would not plan on using triggers that may be prescaled, to start with. The prescale factor can change dramatically and in an unpredictable way in first running. It will be more difficult to get the effective luminosity.
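A minimal sketch of the effective-luminosity bookkeeping for a prescaled trigger in Python (the luminosities and prescales are placeholders): per lumiblock the effective luminosity is the recorded luminosity divided by the prescale.

 # Effective luminosity for a prescaled trigger, summed over lumiblocks.
 lumiblocks = [
     # (recorded luminosity [nb^-1], prescale of the trigger in that lumiblock)
     (10.0, 1), (10.0, 1), (10.0, 5), (10.0, 20),
 ]
 lumi_eff = sum(lumi / prescale for lumi, prescale in lumiblocks)
 lumi_tot = sum(lumi for lumi, _ in lumiblocks)
 print(lumi_eff, lumi_tot)   # effective vs recorded luminosity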


How do you plan to apply data quality and trigger efficiency ? As an efficiency for your signal, or do you plan to reduce to an effective luminosity ? Both are very viable. Think about it. Can you estimate whether the 3J_10 could be safe to run unprescaled ? One issue is noise in the calorimeter. This could be evaluated in cosmics running (check the 3J_10 rate, or better, search for 3J_10 triggers...). It could also be done on a minimum-bias sample from the first collisions, checking for the 3J_10 trigger (also here not many events... if everything is as expected).
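A rough sketch of such a rate estimate in Python (all numbers are placeholders, and the simple scaling assumes the minimum-bias sample is an unbiased sample of bunch crossings): count how often the jet item fires and scale by the sampling rate; with zero observed events only an upper limit can be quoted.

 # Rough rate estimate for a jet trigger from a minimum-bias sample.
 n_fired = 0            # minbias events firing the jet item (placeholder)
 n_sampled = 100_000    # minbias events looked at (placeholder)
 sampling_rate = 1e5    # rate [Hz] at which the sampled events were taken (placeholder)

 n_up = 3.0 if n_fired == 0 else n_fired   # ~95% CL Poisson upper limit if nothing is seen
 rate_limit = n_up / n_sampled * sampling_rate
 print(f"jet trigger rate < ~{rate_limit:.2f} Hz")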


The object definition. What are the definitions for first data in the SUSY group ? Do you plan to take the ones from the Jet/EM/Muon walk-throughs ? Is there a study for 7 TeV ? I agree you should take whatever is available and not spend time on studies for now. But you need a starting point. Talk to Tommaso Lari ? Once you have an analysis, you can take another object definition, compare, and take the difference as a systematic.


One point I am sensitive to: the separation of object definition and event selection. For example jets: you will require jet-pT > 50, but a 15 GeV calorimeter object is nevertheless still a jet. This affects the n-jet quantity in the event.
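A minimal sketch of the distinction in Python (the jet pTs and thresholds are placeholders): the analysis cut is applied to the hard jets, while the n-jet count still includes every calorimeter object above the object-definition threshold.

 # Object definition vs event selection for jets.
 jet_pts = [120.0, 65.0, 48.0, 22.0, 15.0, 8.0]   # calorimeter jet pTs [GeV], placeholder

 object_threshold = 15.0   # a 15 GeV calorimeter object is still a jet
 selection_cut = 50.0      # analysis requirement on the hard jets

 n_jets = sum(pt >= object_threshold for pt in jet_pts)     # n-jet quantity of the event
 n_hard_jets = sum(pt >= selection_cut for pt in jet_pts)   # jets entering the cuts
 print(n_jets, n_hard_jets)   # 5 vs 2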


Make sure your event selection is exclusive (orthogonal) to possible other selections in other analyses. This is to make sure we could combine the results in an easier way.


Tobias is following the Freiburg selection. It may be a good idea to start from that too ?


Find out more about the delta-Phi(jet,met) cut. Is this to reject QCD ?
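For reference, a minimal sketch of the variable in Python (the jet and MET phi values and the cut value are placeholders): the minimum azimuthal separation between the leading jets and the missing-ET vector; the usual motivation is that MET from a mismeasured QCD jet points along that jet, giving small values.

 # Minimum delta-phi between the leading jets and the MET vector.
 import math

 def delta_phi(phi1, phi2):
     """Wrap the azimuthal difference into [0, pi]."""
     dphi = abs(phi1 - phi2)
     return min(dphi, 2.0 * math.pi - dphi)

 jet_phis = [0.3, 2.8, -1.4]   # phi of the three leading jets (placeholder)
 met_phi = 0.5                 # phi of the missing-ET vector (placeholder)

 min_dphi = min(delta_phi(phi, met_phi) for phi in jet_phis)
 passes = min_dphi > 0.2       # illustrative cut value only
 print(min_dphi, passes)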


Start cut based. Then implement a likelihood if you have time.
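A minimal sketch of a cut-based selection organised as a cutflow in Python (the cuts and values are placeholders, not the CSC or Freiburg numbers): applying the cuts in sequence gives the numbers needed for the sample table at every stage.

 # Cut-based selection organised as a sequential cutflow.
 cuts = [
     ("trigger",     lambda ev: ev["trigger"]),
     ("1 lepton",    lambda ev: ev["n_leptons"] == 1),
     (">= 3 jets",   lambda ev: ev["n_jets"] >= 3),
     ("MET > 50",    lambda ev: ev["met"] > 50.0),
     ("dphi(j,MET)", lambda ev: ev["min_dphi"] > 0.2),
 ]

 def cutflow(events):
     counts = []
     survivors = list(events)
     for name, passes in cuts:
         survivors = [ev for ev in survivors if passes(ev)]
         counts.append((name, len(survivors)))
     return counts

 # Example with one placeholder event
 example = [{"trigger": True, "n_leptons": 1, "n_jets": 4, "met": 80.0, "min_dphi": 0.5}]
 print(cutflow(example))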


You will need to use Met. But the values you look at are rather high (> 50 GeV). Please provide a Met resolution plot. Either expected, or from cosmics, or from first collisions. Feel free to ask Joel to help you.
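A minimal sketch of a resolution estimate in Python (the inputs are randomly generated placeholders): in events with no true MET (cosmics, minimum bias) bin the events in the scalar sum ET and take the spread of the MEx / MEy components per bin.

 # MET resolution: spread of MEx in bins of the scalar sum ET.
 import numpy as np

 sum_et = np.random.uniform(0, 200, 5000)              # placeholder, GeV
 mex = np.random.normal(0.0, 0.5 * np.sqrt(sum_et))    # placeholder MEx per event

 bin_edges = list(range(0, 220, 20))
 for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
     sel = (sum_et >= lo) & (sum_et < hi)
     if sel.sum() < 10:
         continue
     print(f"sumET {lo:3d}-{hi:3d} GeV: sigma(MEx) = {mex[sel].std():.2f} GeV")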


Look at the estimates / numbers that the data-driven BG groups provide. Do you feel that you can use them ? Just take the numbers ? I believe for QCD we will have to go the data-driven way. W+jets and ttbar are better under control. Who can provide you with a reasonable QCD (multijet) sample from data ?


Who in the SUSY group will provide a good-runs list ? Has someone defined the flags to be used ? When will this happen ? How often ? Daily ? Will the luminosity be provided ?


How is the bookkeeping done ? How are duplicated events removed ? How do you make sure that you processed all files corresponding to an LBN list ?
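A minimal sketch of both checks in Python (run and lumiblock numbers are placeholders): drop events whose (run, event) pair was already seen, and compare the set of processed lumiblocks against the requested LBN list.

 # Duplicate-event removal and lumiblock bookkeeping.
 seen = set()
 def is_duplicate(run, event):
     key = (run, event)
     if key in seen:
         return True
     seen.add(key)
     return False

 # LBNs requested (e.g. from the good-runs list) vs LBNs actually processed (placeholders)
 requested_lbns = {(123456, lb) for lb in range(1, 101)}
 processed_lbns = {(123456, lb) for lb in range(1, 96)}

 missing = requested_lbns - processed_lbns
 print(f"{len(missing)} lumiblocks missing from the processed files")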


A lumiblock-based DQ selection is applied. Is an event-based one applied as well ? (noise)


Are there plans to overlay minimum bias events from data to the MC (for noise, multiple interactions, etc...) ?


It would be nice to use some ensemble-testing techniques, i.e. doing many pseudo-experiments in MC to estimate the fluctuations / limits. This could be done at 1 fb-1 and then the expectations scaled down to 10-100 pb-1.
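A minimal sketch of such an ensemble test in Python (the expected background is a placeholder): throw Poisson pseudo-experiments around the expected yield at 1 fb-1 and at scaled-down luminosities to see the spread.

 # Ensemble testing with Poisson pseudo-experiments at different luminosities.
 import numpy as np

 rng = np.random.default_rng(1)

 def pseudo_experiments(expected, n_toys=10000):
     toys = rng.poisson(expected, n_toys)
     return toys.mean(), toys.std()

 b_at_1fb = 250.0                       # expected background at 1 fb^-1 (placeholder)
 for lumi_scale in (1.0, 0.1, 0.01):    # 1 fb^-1, 100 pb^-1, 10 pb^-1
     mean, spread = pseudo_experiments(b_at_1fb * lumi_scale)
     print(f"L x {lumi_scale:5.2f}: <N> = {mean:.1f}, RMS = {spread:.1f}")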