...and we move to the corresponding CMS talk, by Joshuha Thomas-Wilsker.

Okay, can you see the slides?

Yeah, but not in full screen.

Is that better? Yeah. Thanks. Go ahead.

Okay, so, yeah, I'm Joshuha Thomas-Wilsker, and I'm going to give the CMS side of the updates on the recent ttH measurements. I won't go into the motivation again, because there was a very nice explanation of the mathematics in the previous presentation, but as you know, ttH is an important measurement because it provides us with a direct probe of the top Yukawa coupling, due to the large top mass and the fact that the Yukawa coupling is predicted to be proportional to the fermion mass. On the CMS side the ttH searches consist of several analyses, each targeting a specific Higgs decay channel, and these analyses combine all possible top quark decay modes. Because ttH production has a relatively low cross section, in order to observe the process we are required to combine multiple final states. In CMS these final states are ttH with H to gamma gamma, ttH with H to bb, the ttH multilepton channels, and ttH with H to ZZ to four leptons, and you can see the relative branching fractions for each of these channels on the right-hand side.
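The proportionality mentioned here can be written explicitly. In the Standard Model the Yukawa coupling of a fermion f is given by the standard textbook relation (my addition for context, not from the slides):

```latex
y_f = \sqrt{2}\,\frac{m_f}{v}, \qquad v \simeq 246~\mathrm{GeV},
```

so for the top quark, with a mass of about 173 GeV, the coupling comes out close to one, which is why ttH production gives such a direct handle on the top Yukawa coupling.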
Now, this is by now a fairly old result, but it led to the first observation of the ttH process, on the 4th of June 2018, by the CMS collaboration. The ttH combination combined the ttH analyses across the 7, 8, and 13 TeV data sets and led to the extraction of a signal strength parameter of 1.26 +0.31/-0.26, which gave us 5.2 sigma and was enough to declare an observation. You can see on the bottom left-hand side of the slide the major uncertainties associated with this measurement. For the signal, these come from the theory uncertainty on the ttH cross-section normalization; for the backgrounds, the major theory uncertainties come from the tt+bb and tt+cc predictions and the ttV predictions, these being the dominant backgrounds in the ttH, H to bb and ttH multilepton analyses. Experimentally we struggle with the lepton trigger, identification, and isolation efficiency uncertainties, and with the misidentified-lepton predictions. So today I'm just going to highlight the recent updates since this combination: the ttH multilepton analysis, ttH with H to bb, and ttH with H to gamma gamma.
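For intuition on what a claim like "5.2 sigma" means, here is a minimal sketch of the Asimov approximation for the median discovery significance of a simple counting experiment. This is my own illustration with made-up numbers, not the actual CMS statistical procedure, which uses a full profile-likelihood fit with systematic uncertainties:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for s expected signal events on top of
    b expected background events (pure Poisson counting, no systematics)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Illustrative numbers only: 100 signal events over 1000 background events
print(round(asimov_significance(100, 1000), 2))  # → 3.11
```

In the limit s much smaller than b this reduces to the familiar s over square root of b estimate; the real analysis significance is degraded further by the background uncertainties discussed above.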
For the multilepton and H to bb channels these are slightly older results using a combination of the 2016 and 2017 datasets, whereas the ttH, H to gamma gamma study is a really recent, pretty nice result using the full Run 2 data set. So, on the ttH multilepton side we target the Higgs boson decaying to a pair of vector bosons or tau leptons; you can see an example of a signal Feynman diagram on the top right. The benefit of this channel is that it's relatively clean; however, the Higgs reconstruction is still quite difficult. The backgrounds for the analysis are divided into irreducible and reducible backgrounds. The irreducible backgrounds are largely from ttW and ttZ, which are shown in the two example Feynman diagrams on the right-hand side in the middle, and the reducible backgrounds come mostly from events where at least one reconstructed lepton is not due to a prompt lepton; these are the fake leptons. Other Higgs production modes in this analysis are treated as backgrounds and set to the Standard Model expectation.
On the bottom of the slide you can see some schematics of the sources of fake leptons, typically coming from semileptonic b-hadron decays, photon conversions, or other misidentified objects. So, on slide seven, just to mention here that what we call tight leptons are typically defined by a specially trained lepton BDT. If you look at the little cartoon on the top right, you essentially have this BDT that's used to discriminate between prompt and fake leptons, and then we define a cut on this BDT that defines our tight leptons; we also have fakeable leptons. The tight leptons are used to define the event categories, as shown on the bottom right-hand side of the slide, and then the fakeable subcategory is used to define the fake background. The reducible fake and charge-flip backgrounds are estimated from data in this analysis using a fake-factor method, whereas the irreducible background processes are modelled using Monte Carlo. Then, in order to make sure that we don't have any overlap between the Monte Carlo and what we actually use to model the fake background, we have a gen-reco geometric matching of leptons in the simulation. And then, as I mentioned, events are categorized according to the number and the sign of the leptons.
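The fake-factor idea mentioned above can be sketched schematically: measure the rate at which fakeable leptons also pass the tight selection in a fake-enriched measurement region, then weight fakeable-but-not-tight events in the analysis selection by F/(1-F) to predict the fake contribution passing the tight cuts. This is a deliberately simplified illustration with made-up yields, not the CMS implementation, which bins the fake rate in lepton pT and eta, among other things:

```python
def fake_transfer_weight(fake_rate):
    """Weight applied to each fakeable-but-not-tight event to predict
    the number of fake leptons that pass the tight selection."""
    return fake_rate / (1.0 - fake_rate)

# Measurement region (fake-enriched): how often fakeable leptons pass tight
n_tight, n_fakeable = 120.0, 1120.0     # made-up yields
f = n_tight / n_fakeable                # fake rate, about 0.107

# Application region: events passing the selection with a
# fakeable-but-not-tight lepton instead of a tight one
n_application = 400.0
predicted_fakes = n_application * fake_transfer_weight(f)
print(round(predicted_fakes, 1))  # → 48.0
```

The gen-reco matching mentioned in the text is what prevents double counting: simulated events whose leptons are genuine fakes are removed from the Monte Carlo, since they are already covered by this data-driven estimate.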
We also have additional ttW, ttZ, and diboson control regions to constrain the background processes. In each category a signal-versus-background BDT is trained, except in the four-lepton channel, where, because the channel is statistically limited, it is difficult to train an MVA, so a cut-and-count approach is used instead. The signal strength parameter is then extracted from a binned maximum likelihood fit of the discriminants to data, where no prior assumptions are made on any of the ttW, ttWW, or ttZ backgrounds. For the observed signal strength in the 2017 data set we see 0.75 +0.46/-0.43, giving an observed significance of 1.7 sigma, and when this is combined with the 2016 results we see a signal strength parameter of 0.96 +0.34/-0.31, which gives us evidence for this process at 3.2 sigma significance. And, as a nice comparison with the previous speaker's talk, we have ttW and ttZ rate modifiers as well, for the 2017 data only: we see a rate modifier of 1.42 on the ttW process and a rate modifier of 1.69 on the ttZ process.
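The binned maximum-likelihood extraction of the signal strength can be illustrated with a toy: given signal and background templates, the expected yield in bin i is mu times s_i plus b_i, and the best-fit mu minimizes the Poisson negative log-likelihood. A minimal grid-scan sketch with invented templates (the real fit simultaneously profiles many nuisance parameters, including the freely floating ttW and ttZ rates mentioned above):

```python
import math

def nll(mu, sig, bkg, obs):
    """Poisson negative log-likelihood (constant terms dropped)."""
    total = 0.0
    for s, b, n in zip(sig, bkg, obs):
        lam = mu * s + b          # expected yield in this bin
        total += lam - n * math.log(lam)
    return total

# Invented discriminant templates: signal concentrated in the last bins
sig = [1.0, 2.0, 5.0, 10.0, 20.0]
bkg = [200.0, 150.0, 100.0, 60.0, 30.0]
obs = [s + b for s, b in zip(sig, bkg)]   # Asimov data with mu = 1

# Simple grid scan for the best-fit signal strength
mus = [i / 1000.0 for i in range(0, 3001)]
mu_hat = min(mus, key=lambda m: nll(m, sig, bkg, obs))
print(mu_hat)  # → 1.0
```

With Asimov data the fit recovers the injected signal strength exactly; with real data the width of the likelihood around the minimum gives the quoted asymmetric uncertainties.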
Now, on the bottom right-hand side of the slide, you can also see that the theoretical sources of uncertainty are quite large in this analysis, and the uncertainty on the fake background yield is also quite dominant. For the ttH, H to bb analysis: this channel benefits from a large branching fraction but suffers from a huge irreducible tt+bb (or tt plus at least one b jet) background, and also from a QCD multijet background in the fully hadronic channel. And then the combinatorics of the final state are very difficult, so there's still no unambiguous way to reconstruct the Higgs decay. The baseline selection is highlighted here in the table: you have the three channels, fully hadronic, semileptonic, and dileptonic, essentially defined by the number of leptons in the event. In the fully hadronic channel there's additionally a requirement on the quark-gluon likelihood ratio, which is used to identify light-quark jets against gluon jets, and you can see that there are additional control regions in order to constrain the QCD background. The analysis is rather complicated.
So, the semileptonic, dileptonic, and fully hadronic channels are further categorized according to the number of jets and b-tagged jets. In the semileptonic channel you have a multi-classification deep neural network with six output nodes, each node targeting either the signal process or one of the tt+jets backgrounds. In the dileptonic channel you have a BDT, just a binary BDT in this case, which is then used as the input to the final fit, and in the fully hadronic channel you have the matrix element method. The signal strength parameter is extracted from a binned maximum likelihood fit to data, and this channel, using the combined 2016 and 2017 results, sees an observed significance of 3.9 sigma (3.5 sigma expected), extracting a signal strength parameter of 1.15 +0.32/-0.29. The large uncertainties come from the normalization of the dominant backgrounds, the tt+jets backgrounds, and from the ttH renormalization and factorization scales. Now on to the final ttH channel, ttH with H to gamma gamma. This analysis suffers from a small branching fraction but has a very clean signal, which means you can reconstruct the diphoton invariant mass with excellent precision.
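The six-node multi-classification scheme is, at its core, a softmax output layer: the network produces one score per process hypothesis and each event is assigned to the category of its highest-scoring node. A minimal sketch of that output stage in plain Python, with made-up scores and with the process list being my guess at the kind of tt+jets splitting used, not the exact CMS node definitions:

```python
import math

# Hypothetical node labels: signal plus tt+jets subprocesses
PROCESSES = ["ttH", "tt+bb", "tt+2b", "tt+b", "tt+cc", "tt+lf"]

def softmax(logits):
    """Convert raw network outputs into probabilities summing to one."""
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Made-up raw scores for one event from the final dense layer
logits = [2.1, 1.3, 0.2, -0.5, -1.0, 0.8]
probs = softmax(logits)
category = PROCESSES[probs.index(max(probs))]
print(category)  # → ttH
```

Categorizing events by their winning node, then fitting the score distribution within each category, is what lets a single network both separate signal from background and constrain the individual tt+jets components.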
The ttH signal forms a narrow diphoton invariant-mass peak on the smoothly falling continuum background, and the main backgrounds are from non-resonant diphoton production and also from non-ttH Higgs production. The analysis is divided into two channels, a leptonic and a hadronic category, defined by the number of leptons in the event, and they also have slightly different jet requirements, but both require a diphoton event with an invariant mass between 100 and 180 GeV. There's a dedicated binary BDT, the so-called background BDT, which is used to define the signal regions, as you can see on the right-hand side, for the hadronic and leptonic channels. For this, the signal and background are mostly modelled using Monte Carlo, and this is what's used for the training; the backgrounds are gamma+jets, gamma gamma+jets, tt+jets, tt+gamma, tt+gamma gamma, and V+gamma. Because the Monte Carlo struggles to model the gamma+jets background process very well in the hadronic channel, this process is derived from data. The category boundaries, for the signal strength and the CP measurement, are shown in the thin and thick dashed lines you can see on the right-hand side.
So, you have hadronic categories one through four, plus the CP categories, in the left-hand block, and on the right-hand side you have the same for the leptonic channel. It's just worth noting that the grey region on the plots is shaded out because those events are not used in the analysis. Then finally, a maximum likelihood fit is performed on the diphoton invariant-mass distribution, and this is used to extract the cross section times branching ratio and the signal strength. In the likelihood fits the ttH signal is parameterized using a double-sided Crystal Ball shape plus an additional Gaussian function, and the background is estimated by fitting various functional forms to the diphoton-mass sideband data, where the choice of functional form is handled using the discrete profiling method. The dominant theory uncertainty in this channel is around 8%, coming from the ttH cross section, and the dominant experimental uncertainties come from b tagging, photon ID, jet energy scale and resolution, and the luminosity. You can see the extracted value for the cross section times branching ratio of 1.56 +0.34/-0.32 observed, with 1.13 expected, which gives us a signal strength parameter of 1.38 +0.36/-0.29.
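The double-sided Crystal Ball used for the signal model is a Gaussian core with power-law tails on both sides, matched so that the function is continuous at the transition points. A sketch of the (unnormalized) shape with illustrative parameter values; the actual fitted parameters are category dependent and not quoted on the slides:

```python
import math

def dscb(x, mu, sigma, alpha_lo, n_lo, alpha_hi, n_hi):
    """Unnormalized double-sided Crystal Ball: Gaussian core for
    -alpha_lo <= t <= alpha_hi, power-law tails outside."""
    t = (x - mu) / sigma
    if t < -alpha_lo:  # low-side power-law tail
        core = math.exp(-0.5 * alpha_lo ** 2)
        return core * (alpha_lo / n_lo * (n_lo / alpha_lo - alpha_lo - t)) ** (-n_lo)
    if t > alpha_hi:   # high-side power-law tail
        core = math.exp(-0.5 * alpha_hi ** 2)
        return core * (alpha_hi / n_hi * (n_hi / alpha_hi - alpha_hi + t)) ** (-n_hi)
    return math.exp(-0.5 * t * t)  # Gaussian core

# Illustrative parameters: peak at 125 GeV, roughly 1.5 GeV resolution
params = dict(mu=125.0, sigma=1.5, alpha_lo=1.2, n_lo=4.0, alpha_hi=1.6, n_hi=8.0)
print(dscb(125.0, **params))  # → 1.0 at the peak
```

The power-law tails absorb non-Gaussian resolution effects and photon energy mismodelling; in the analysis an extra Gaussian is added on top of this shape, as noted above.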
And this provides us with an observed significance of 6.6 sigma, so this is an observation. You can see in the top right-hand plot the nice diphoton invariant-mass distribution, and in the inset the likelihood scan, where at zero signal you can see the crossing well above six sigma. Now for the CP study: the tree-level top Yukawa coupling and its CP structure are also tested in this analysis. The CP structure of the ttH amplitude can be parameterized using the equation shown here, where kappa_t and kappa-tilde_t are the CP-even and CP-odd couplings, set in the Standard Model to their respective values of one and zero. The CP structure is then measured using this fCP(Htt) variable, where fCP(Htt) equal to one corresponds to the pure pseudoscalar, CP-odd, model of the CP structure and equal to zero corresponds to the pure CP-even structure, with intermediate values giving you the mixed models.
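The equation referred to on the slide is conventionally written in the following form; I am reproducing the usual parameterization for this kind of analysis, up to sign conventions, rather than the slide itself:

```latex
\mathcal{A}(Ht\bar{t}) \;\propto\; -\frac{m_t}{v}\,
\bar{\psi}_t\left(\kappa_t + i\,\tilde{\kappa}_t\,\gamma_5\right)\psi_t\, H ,
\qquad
f_{CP}^{Htt} \;=\; \frac{|\tilde{\kappa}_t|^2}{|\kappa_t|^2 + |\tilde{\kappa}_t|^2}\,
\mathrm{sign}\!\left(\tilde{\kappa}_t/\kappa_t\right).
```

With the Standard Model values kappa_t = 1 and kappa-tilde_t = 0 this gives fCP = 0, while kappa_t = 0 with any nonzero kappa-tilde_t gives |fCP| = 1, the pure pseudoscalar case tested below.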
Then, in the CP analysis, an additional BDT is trained to distinguish the CP-even and CP-odd contributions, where the BDT output is meant to represent this D0 observable. So essentially you have the categorization according to the background BDT, and then a further categorization according to these D0 bins; you can see the various bins in the plot on the bottom right-hand side. A simultaneous fit to the diphoton invariant-mass spectrum is done with the signal strength parameter and fCP(Htt) unconstrained, and the analysis is really statistically dominated here. It's also worth noting that the couplings to the other particles are constrained to their Standard Model values, and kappa_t is constrained to be positive. The observed value of fCP(Htt) is 0.00 plus or minus 0.33, which gives a 95% confidence level limit on fCP(Htt) of less than 0.67, and this allows us to exclude the pure pseudoscalar model at 3.2 sigma. So I can finally summarize, on slide 16. Since the observation of the ttH process by both CMS and ATLAS, the focus has really been on targeting single-channel observations of ttH.
I've presented here the first single-channel observation of the ttH process using the CMS detector, with a significance of 6.6 sigma observed. Furthermore, the tree-level top Yukawa coupling and its CP structure have been tested: the pure pseudoscalar model of the CP structure of the Htt coupling, fCP(Htt) = 1, is excluded at the 3.2 sigma level. The most recent publications from the ttH multilepton and H to bb channels used the 2016 and 2017 datasets combined, and they both reported evidence in the individual channels. So the outlook at the moment is that basically we're expecting some exciting updates from the multilepton and H to bb teams using the full Run 2 datasets, and then I guess in Run 3 we'll start to look towards differential measurements, STXS, and things like this. Thank you very much.

Thanks a lot. I already see a hand. Laura, you can talk now.

Can you hear me? Yeah. Hello. Hi, Josh. I have a question, a simple one. One thing that I was wondering about is this measurement: the ttW and ttZ rates you extract from studying the backgrounds for ttH, how do they compare with the dedicated measurements that we have heard about? I mean, are they in agreement?
So do you see an excess on both sides, or not? So, sorry Laura, I lost you a little bit, but I think I know what you're asking: you're asking about the compatibility between the direct measurements of ttW and ttZ and these? Yeah, yeah. So typically, yes, the ttW and ttZ rates tend to come out high in both the direct measurements and the measurements that we see here. It's worth noting these aren't cross-section measurements, right? These are rate-modifier measurements, so you can't do a direct comparison, and the uncertainty is quite large on these as well. But yeah, in general, in the ttV measurements and in the extraction of the rate modifiers here, they tend to be above the Standard Model predicted values.

In both cases? Both cases, yeah. Okay, thank you.

Yeah, I see more questions. Fabio.

Thanks for the nice talk. Well, maybe a comment and a question. The comment is that of course there was an ATLAS result on the CP structure of the top Yukawa interaction, but it was shown in another session, so I don't know how these sessions have been organized. But the question is on slide 10: when you show the results, the 2016 and 2017 results are quite far apart. Of course, if you take the total uncertainty they are reasonably in agreement, but this is dominated by the systematics, which must be somewhat correlated.
So did you check the compatibility between the results of the two years, taking into account the potential correlation of the systematics?

Yeah, I think this was indeed checked, but this is essentially a rerun of the 2016 analysis, I think, so there's been no real re-optimization. I think they were combined at the datacard level (one of the analysts might be able to correct me here), but there would have been updates to the analysis in 2017, which I think included this DNN in the semileptonic channel. But I'll leave it to one of the analysts to say whether they actually checked any further.

So, okay, Kirsten has a question.

Yes, I have a question on slide 8, for the ttH multilepton analysis, where you give the rate modifiers for ttW and ttZ for 2017. I don't know if that means that you have a different fit model for 2016, since you're not giving these, and if so, how do you then combine the two years, given that these backgrounds are really important?

So these rate modifiers were not floated freely in the, sorry, 2016 plus 2017 final result. What we did is a check on the 2017 data set in order to extract just the 2017 signal strength, and then the rate modifiers, only in the 2017 result.
233 00:20:22,530 --> 00:20:24,090 So this was this was kind of done 234 00:20:25,140 --> 00:20:30,480 after the after the fact, if you see what I mean. So, the 2000 so the 2000 there's a 235 00:20:30,480 --> 00:20:35,850 final result with the 2007 16,017. They're not floating but they've done floated in 236 00:20:35,880 --> 00:20:41,790 2017 only fit in order to check the values. So, 237 00:20:41,880 --> 00:20:47,670 maybe I can figure a chance to comment on this actually in the in the combine the 16 238 00:20:47,700 --> 00:20:54,060 Plus, I mean, we have both numbers. So, we can follow up maybe in a further 239 00:20:54,060 --> 00:20:57,180 discussion session about exactly all the details. 240 00:21:00,480 --> 00:21:03,630 Okay, so last question from Lisa. 241 00:21:10,889 --> 00:21:18,149 Yeah, I have two comments. First to Lauer's question about modifiers that come 242 00:21:18,149 --> 00:21:19,289 out higher 243 00:21:20,760 --> 00:21:27,900 than expected in CMS. They come out higher in Atlas two, we actually had special 244 00:21:27,960 --> 00:21:35,940 comparison during the Elysee top working group meeting. So it's quite consistent. 245 00:21:35,970 --> 00:21:41,460 Maybe the magnitude is a bit different. However, the trend is clearly the same. 246 00:21:42,990 --> 00:21:50,160 And in fact, the last question about floating modifiers I thought it since I 247 00:21:50,460 --> 00:21:57,060 was reading this paper 2016 carefully, again, starting with these modifiers I 248 00:21:57,060 --> 00:22:04,020 think 2016 paper has two results. One is fixed t Tw and another was floating to Tw. 249 00:22:04,440 --> 00:22:10,230 And floating t w has smaller significance because if you float it in w it goes up 250 00:22:10,680 --> 00:22:19,470 and the signal comes out a bit smaller. So that one that option was used for the, for 251 00:22:19,470 --> 00:22:23,040 the, for the combination. No. 252 00:22:23,760 --> 00:22:30,180 Yeah, yeah. 
So I think with the 2016 plus 2017 data both were floated in the end. Exactly, yes.

Yes, I can confirm that, and the numbers are in the available documentation, in the paper. Yeah.

Yeah. Okay, so I think we should really move on, otherwise we will be late for the plenary session.

I still see a hand from Laura. Is it urgent? Or is it just... okay, no, sorry, it's left over. Okay.