View attachment 6101576
WTH!!!! IS THIS TRUE!??!?!
If this is true, I hate Nick and Kayla so fucking much. They deserve the rope. I feel sick.
This is not a cope. I believe Nick is a degenerate and he'd better start working on fixing himself, because those children deserve better. I do believe the kid used cocaine, but the value read on the test is bogus.
People who OD on cocaine usually have around 250 mg/L, but that's a blood concentration. If you do cocaine, whatever concentration ends up in your hair probably corresponds to an even higher concentration in your blood. Remember that the tests look for a metabolite, a product of your own body trying to get rid of the drug, and your body can only clear it so fast. That's why you can do a lot of heroin over a lifetime, but having too much too fast can kill you. One thing is the irreparable damage the substance causes to your body while it gets metabolized, but there's also the fact that our blood is a very delicate solution whose balance is quite fragile. Having too much of anything in it at any given time, even water, oxygen, nutrients, glucose, or free-floating proteins, causes your organs to fail and leads to death. So for such a young child to have that much of the metabolite in her hair, it sounds like she should have ended up in the ER first.
A cutoff value in this equipment usually means "do not trust any value above or below this threshold," and to understand why, it is necessary to explain how we get the information we want from the analyzed samples. DISCLAIMER: I am not a qualified expert on this subject.
The principle of operation consists of irradiating the processed sample in a medium. This is not a test where you fire a laser at a molecule and, depending on how it reflects, it raises a flag and sets bool iscocaine = true;. You are not really grabbing the metabolite of interest and putting it on a scale; what you get is a concentration, how much of it there is in the sample. Since you know the initial sample mass, adjusted for some additive noise in the processing itself (more on that later), you can make a rough estimate of the actual mass. But you are not directly measuring it.
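Just to make the concentration-vs-mass point concrete, here's a tiny back-of-the-envelope sketch in Python. Every number in it is invented (it is not from the actual report), and the recovery factor is just an assumed fudge for losses in processing:

[CODE]
# Hypothetical numbers only -- not from the actual test report.
# The instrument gives you a concentration; the analyte mass is an estimate
# derived from it, not something you ever weigh directly.

reported_concentration_ng_per_mg = 5.0   # ng of metabolite per mg of hair (made up)
sample_mass_mg = 20.0                    # how much hair was processed (made up)
processing_recovery = 0.8                # fraction surviving extraction (assumed)

estimated_mass_ng = reported_concentration_ng_per_mg * sample_mass_mg / processing_recovery
print(f"Estimated metabolite mass: {estimated_mass_ng:.1f} ng (rough estimate, not a measurement)")
[/CODE]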
Light is nothing but little packets of electromagnetic energy. Because of the curious properties of electric charges, a charge is like a weight on a stretched blanket: it extends its influence all over the field and is stronger the closer you are to it, much like planets and gravity, but charges also react to the presence of other charges by attracting or repelling them. Moving a charge means applying some sort of mechanical energy, much like slapping still water; the resulting waves carry and dissipate the energy of your slaps through the medium. That's literally what light/radiation is, and Maxwell's equations describe how those waves behave in the medium. Essentially, the disturbance of the electric field caused by a moving charge induces an orthogonal magnetic field, which in turn induces an orthogonal electric field, and so on, carrying some of the dissipated energy through the field indefinitely, spreading out like an expanding sphere whose intensity drops off at a 1/r² rate. Atoms are nothing but packets of charges that react to these disturbances, and in the end the charged particles in atoms exchange energy through photons like this. Since scientists know exactly how these interactions occur, they know theoretically that "if you irradiate this particle with this, it should emit photons with these characteristics (frequencies)." So you know how much energy it should send back and at which frequencies, giving a very specific power-spectrum signature, which you use to identify the molecules you are looking for. But it is not like you can sense this power-spectrum signature directly: you pass the light through a lens and focus it on a photosensor, which transduces it into an electrical voltage we can make sense of. Thanks to the properties of waves, you can mathematically split that signal into an infinite sum of sinusoidal waves of different frequencies. Each of those sinusoids is a basis, and by taking the dot (inner) product between the basis and the signal you get a scalar coefficient that tells you how much of that frequency exists in your signal. Sweeping through all the bases maps your signal into frequency space via a Fourier transform, giving you a continuous (or discrete, if the analyzed signal is periodic) frequency spectrum.
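If you want to see the "project onto sinusoidal bases" part in action, here's a toy Python/numpy sketch. The signal, frequencies, and amplitudes are all arbitrary; the point is just that the FFT coefficients tell you how much of each frequency is in the signal:

[CODE]
import numpy as np

fs = 1000.0                      # sampling rate in Hz (arbitrary)
t = np.arange(0, 1.0, 1/fs)      # 1 second of samples

# Toy "measured" signal: two sinusoids with known frequencies and amplitudes.
signal = 1.0*np.sin(2*np.pi*50*t) + 0.3*np.sin(2*np.pi*120*t)

# The FFT is exactly that change of basis: each coefficient is the inner
# (dot) product of the signal with one complex sinusoid.
coeffs = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1/fs)
amplitudes = 2*np.abs(coeffs)/len(signal)   # scale back to per-sinusoid amplitude

# The two strongest bins land at ~50 Hz and ~120 Hz, amplitudes ~1.0 and ~0.3.
top = np.argsort(amplitudes)[-2:]
for i in sorted(top):
    print(f"{freqs[i]:6.1f} Hz -> amplitude {amplitudes[i]:.2f}")
[/CODE]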
But since this sum of component bases is infinite, you can't really map your entire signal space into frequency space, because you are limited in bandwidth, and that limitation by itself takes away from the frequency spectrum we intend to analyze to identify substances in the medium. This is an information loss; think of it as lossy compression, because you transform a series of time-domain samples into a series of coefficients, each corresponding to one frequency on your frequency map. Because of how the continuous and discrete domains relate, you would need an infinite number of samples, for all the time the signal existed (and even before, due to the non-causal nature of the math), which would give you an infinite number of coefficients, each corresponding to some fraction of the rate you are sampling that signal at (more on that later). So if you theoretically took every frequency your samples mapped to, calculated how much of each frequency there is in your sample space, and added those waves back together, properly scaled by the coefficient (and phase) you just extracted from the transform, you would be able to reconstruct the original signal, but with an error; the more samples you take, the smaller that error gets. The point is that this processing itself adds noise to the data you are trying to interpret.
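And here's the "lossy compression" idea as a sketch: keep only the K strongest Fourier coefficients of some arbitrary test signal, rebuild it from those, and watch the reconstruction error shrink as K grows. Again, the signal itself is made up:

[CODE]
import numpy as np

n = 1024
t = np.arange(n) / n
# Arbitrary test signal: a few sinusoids plus a slow ramp.
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*40*t) + 0.2*t

coeffs = np.fft.rfft(x)

for k in (4, 16, 64, 256):
    kept = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-k:]   # keep the k strongest frequency components
    kept[idx] = coeffs[idx]
    x_hat = np.fft.irfft(kept, n=n)         # rebuild the signal from those alone
    err = np.sqrt(np.mean((x - x_hat)**2))  # RMS reconstruction error
    print(f"keep {k:4d} coefficients -> RMS error {err:.4f}")
[/CODE]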
That electrical voltage reacts to the interactions of light in the test medium containing the substances you care about. Because of these self-inducing interactions between fields, and because waves get absorbed or reflected by every other molecule in the medium, even their black-body radiation sends light that affects the sensor. What's problematic is that these fields keep inducing and cancelling each other, producing reactive effects. Much like slapping the water once: even when you are no longer putting energy into the system, the energy you introduced keeps influencing it until it dissipates into irrelevance. Here, those reactions interfere because they get added to what you are trying to read. This is the challenge the engineers and scientists designing this equipment face.
Making sense of all that noise is not trivial, and this is where information theory and statistical analysis come to the rescue. What you have here is a communications channel, much like a radio system, or a sonar blasting sound through a medium and trying to make sense of the reflected noise to map a surface. Here, you are trying to blast a known spectrum of energy (preferably an impulse, like a controlled EMP that contains all frequencies, so you get to know how the medium reacts at every frequency, but that's physically impossible) to characterize the medium, the materials you care about, and how they behave through time. Since it is impossible to extract exactly what we care about, we instead build a model from the statistical characteristics of what we measured and work with that. These are stochastic processes carrying the information we care about, swimming in an ocean of noise. On top of that, the signal has to be sampled so we can understand it with computers.
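As a generic illustration of digging a weak signal out of that ocean of noise (this is not the actual instrument's pipeline, just the textbook idea): average the spectrum over many chunks of the measurement so the random part flattens into a floor and the consistent part stands above it. scipy's Welch estimator does that kind of averaging:

[CODE]
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 2000.0
t = np.arange(0, 10.0, 1/fs)            # 10 seconds of "measurements"

# Weak deterministic component (the thing we care about) drowned in noise.
signal = 0.1*np.sin(2*np.pi*300*t)
noise = rng.normal(scale=1.0, size=t.size)
measured = signal + noise

# Welch: split into segments, take a spectrum of each, average them.
# The noise floor averages toward a flat level; the 300 Hz line stays put.
freqs, psd = welch(measured, fs=fs, nperseg=2048)
peak = freqs[np.argmax(psd)]
print(f"Strongest spectral line around {peak:.0f} Hz")
[/CODE]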
Depending on how fast you take those samples, energy at frequencies above half your sample rate (the Nyquist limit) still contributes to the signals you measure, it just shows up folded into the wrong place, because from the digital side that's how it looks to you. You are too slow, and the signal is changing faster than you can capture it. It is like footage of a helicopter rotor in sync with the camera's frame rate that looks like the rotor is spinning very slowly or not moving at all. That's aliasing, and you will find it whenever you process streams of data that change through time, like a video buffer rendering a video game. In a spectrometer, it is not a cool effect; it is noise, and it is annoying. On top of that, how many bits of resolution the data itself has, how many "levels" it can take, also adds noise, because having too few levels (bits) limits how faithfully you can reproduce real-world signals. Quantizing the data limits your dynamic range, meaning your information loses power to the quantization noise; this is quantified as a signal-to-noise ratio. This is why we used to care about bits in video games: you perceive it as audible background noise, banding in the colors, and so on. That's also why we stopped caring much past 32 bits, because audio and video at 24 bits generally cover the limits of our perception (I said generally). And since instructions themselves tend to be smaller, instruction buses wider than 64 bits would mostly waste space, so modern architectures often use a dedicated vector co-processor for that kind of work instead.
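Both effects are easy to fake with made-up numbers: sample a tone above the Nyquist limit and it lands at the wrong frequency, and chop the samples into too few levels and the rounding shows up as noise, which is where the rough "6 dB of SNR per bit" rule of thumb comes from:

[CODE]
import numpy as np

fs = 1000.0                       # sample rate (arbitrary)
t = np.arange(0, 1.0, 1/fs)

# Aliasing: a 900 Hz tone sampled at 1000 Hz is indistinguishable from 100 Hz.
tone = np.sin(2*np.pi*900*t)
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(t.size, d=1/fs)
print(f"900 Hz tone shows up at ~{freqs[np.argmax(spectrum)]:.0f} Hz")   # ~100 Hz

# Quantization: fewer bits -> fewer levels -> more rounding noise -> lower SNR.
clean = np.sin(2*np.pi*50*t)
for bits in (4, 8, 16):
    levels = 2**bits
    quantized = np.round((clean + 1)/2 * (levels - 1)) / (levels - 1) * 2 - 1
    noise = quantized - clean
    snr_db = 10*np.log10(np.mean(clean**2) / np.mean(noise**2))
    print(f"{bits:2d} bits -> SNR ~ {snr_db:.1f} dB")   # roughly 6 dB per bit
[/CODE]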
The challenge is to understand the signatures of the sample. By the duality of the transform, a signal that is periodic in time has a discrete frequency spectrum, and a signal that is sampled (discrete) in time has a periodic spectrum. The number of samples you keep at any given time limits your frequency resolution. Remember, old samples get combined with new samples, and that is literally how digital filters work. The more samples you keep, the more information from the signal you preserve. In continuous time, your signal is limited by bandwidth; in discrete time, it is limited by memory.
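A minimal sketch of that, with an invented noisy reading: an 8-tap moving-average FIR filter that weights the current sample together with the previous seven, trading memory (taps) for noise reduction:

[CODE]
import numpy as np

rng = np.random.default_rng(2)

# Noisy "sensor" readings around a slowly varying true value.
true_value = np.linspace(0.0, 1.0, 200)
readings = true_value + rng.normal(scale=0.2, size=true_value.size)

# FIR moving average: each output is a weighted sum of the current sample
# and the previous 7 samples. More taps kept in memory = more smoothing.
taps = np.ones(8) / 8
smoothed = np.convolve(readings, taps, mode="valid")

print(f"raw noise std     : {np.std(readings - true_value):.3f}")
print(f"filtered noise std: {np.std(smoothed - true_value[7:]):.3f}")
[/CODE]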
The point here is to grab that sample memory and perform a series of mathematical transformations to extract the information we care about. If you run a fast Fourier transform, you perform a long series of multiplications and additions on the samples in a beautiful butterfly configuration and obtain the frequency spectrum, which is just a change of basis over that signal space that spits back the coefficients I mentioned earlier. (This is also roughly how MP3 works: a similar process runs when compressing a WAV, which is heavier because that's all a WAV is, samples of an audio waveform, much like vinyl, where the audio wave itself is engraved on a surface we can reproduce it from. That's why it's much more efficient to just use the disk space for a file system, store files full of those coefficients, and let the MP3 player remember which wave each coefficient corresponds to, so it can reconstruct the original audio signal through a speaker.) According to your test, every characterized substance in the medium peaks at different frequencies. But it is not that easy, because those signals keep reacting with themselves and the medium, and they are not perfectly linear. Signal-dependent noise is extremely difficult to filter. Performing all the operations and stimuli needed to get the best range and resolution at the best cost leaves only so many ways the equipment can make sense of things. Noise is present at all frequencies, including the ones associated with the substance you are looking for. Even with no sample at all, the system running by itself will still show some energy in the spectrum, and it is up to your digital signal processing to identify it as noise. In doing this, you keep distorting your original information, which affects the linear operating range. Past some concentration level the data might be useless, because it doesn't tell you anything useful anymore. If the channel is too saturated, the metabolite won't let you register other substances properly, making you believe you have more of it than you actually do.
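Here's a toy sketch of those last two points (none of this is the real instrument's processing, just the generic idea): even a blank run shows some energy at the "target" frequency because noise lives there too, and once the front end saturates (clips), the value you read back stops tracking the real amount:

[CODE]
import numpy as np

rng = np.random.default_rng(3)
fs, n = 1000.0, 4096
t = np.arange(n) / fs
target_hz = 125.0                       # pretend this bin is the substance's signature

def read_target(amplitude, clip_at=None):
    """Measure energy at the target frequency for one simulated acquisition."""
    x = amplitude*np.sin(2*np.pi*target_hz*t) + rng.normal(scale=0.5, size=n)
    if clip_at is not None:
        x = np.clip(x, -clip_at, clip_at)   # saturated front end
    spectrum = 2*np.abs(np.fft.rfft(x))/n
    bin_idx = int(round(target_hz * n / fs))
    return spectrum[bin_idx]

print(f"blank (no analyte)    : {read_target(0.0):.3f}")   # not zero -- noise lives there too
print(f"real signal, clean    : {read_target(2.0):.3f}")   # tracks the true amplitude ~2.0
print(f"real signal, saturated: {read_target(2.0, clip_at=0.8):.3f}")  # no longer tracks it
[/CODE]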
Due to all this uncertainty, you also need to characterize a typical sample from a human not exposed to the metabolite and establish a threshold. Using some math, you declare, "beyond this point, the chances of it being just noise are less than 5%, so consider it positive." This is the statistical anomaly threshold failsafe. This is where all the sources of noise I described (plus tons of others I'm ignoring for the sake of not driving myself, or you, insane with such a long post; sorry guys, hopefully you find it informative) coalesce into an additive component on top of the reading you take from the sensor, which in turn is a realization of a random variable whose statistical characteristics you know (that is, how these readings behave when taken from people we absolutely know have no cocaine in their system). Most of the noise is random and roughly symmetric around zero, which means that for every additive contribution from the noise there might be another subtracting from it in equal measure, and by knowing how much signal power density sits above the noise you get a pretty good idea of how far your measured samples stray from the real value you are trying to read, on average. That's why this equipment specifies an error of some given percentage: it's a statistical mean, which does not mean that at any given moment a given sample can't carry a huge error without you noticing. This is why, when processing statistical information, you discard values too far from the center of mass: there is a very real, if unlikely, chance that all these noise sources add or subtract constructively and produce a very large error. Therefore, in this kind of signal processing, it's generally a good idea to distrust values that are too low or too high. Additionally, there is the OSI Model Layer 8 factor: the human operator being retarded. A botched or contaminated sample could explain an anomalous reading. That's why, in situations like this, it's a good idea to take 3-5 samples just in case. This is what I believe happened: the kid did use cocaine, but the sample was mishandled; they probably used too little of it, resulting in a bogus reading.
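And the cutoff logic itself, sketched with invented numbers: measure a pile of known-negative samples, model their spread, put the positive cutoff far enough out that a clean sample crosses it less than 5% of the time, and flag anything absurdly far from the blanks (in either direction) for a re-test instead of trusting it:

[CODE]
import numpy as np

rng = np.random.default_rng(4)

# Invented readings from samples we KNOW are negative (instrument noise only).
blank_readings = rng.normal(loc=0.02, scale=0.01, size=500)

mean, std = blank_readings.mean(), blank_readings.std()
cutoff = mean + 1.645*std     # ~95th percentile of a normal -> <5% false-positive rate

def classify(reading):
    # Distrust extremes in both directions: way below the blanks is as suspicious
    # as way above them, because both suggest something went wrong with the sample.
    if reading < mean - 4*std or reading > mean + 20*std:
        return "implausible -- re-test"
    return "positive" if reading > cutoff else "negative"

for r in (0.015, 0.05, 1.50):
    print(f"reading {r:.3f} -> {classify(r)}")
[/CODE]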
I know there's an argument that if that were the case, they would have reordered the test, but these are government workers. They get existential dread at the mere thought of doing their job. Even less would they lift a finger if it could benefit the man they are trying to pressure into a plea deal to avoid a trial. And this effectively puts Nick in a double bind: if he wants this evidence thrown out, he has to challenge it in the court system, which probably (I don't know) means explaining all this shit to a judge, who may or may not get it, or getting an accredited expert witness who can convince him of it. Or I guess he could try to convince the jury this was the case, but given how complex this shit is, it may not even be worth it, because this is just one little component of the child endangerment charges. Even if he manages to convince the jury the result is bogus, or heads all of this off by getting a retest, he might just generate new court-admissible evidence that the kid does in fact do cocaine, which would fuck him over even more, all while there is an ocean of other evidence Nick also has to challenge, so it might not be worth it at all. On the other hand, contesting it could mean the difference between doing some real hard time and getting off lightly with little hard time, a suspended sentence, and probation.
tl;dr, Nick is fucked, but there is probably an error on that cocaine test.