Sunday, December 2, 2018

Neil deGrasse Tyson, or the sexual harassment accusation as a political weapon

It seems that accusations of sexual assault or harassment have become a political weapon in the hands of some feminists: when they cannot fault a person, or want that person replaced, out comes such an accusation, and the accused is guilty no matter what. The latest example is the pair of sexual misconduct accusations made against Neil deGrasse Tyson by two women.

And when you read the content of these "assaults", it is thin. The first concerns a tattoo on the arm of a woman that had caught Tyson's interest, and the second concerns advances that, seen in context, were not advances at all. Neil deGrasse Tyson has answered each of these accusations, but it is his final question that sums it all up: why believe me, when I am the accused? Without saying so outright, Tyson is pointing at a witch hunt.

Note that this comes just as a documentary narrated by Neil deGrasse Tyson, a pro-GMO documentary, is being released. In addition, the Cosmos series, which has charmed generations of astronomy enthusiasts, returns in 2019, and from what I have read, some feminists would like to see Tyson replaced. Small wonder that skeptics have resented feminists ever since episodes like Elevatorgate.

Monday, November 9, 2015

Implantable wireless devices trigger -- and may block -- pain signals

Implanted microLED devices light up, activating peripheral nerve cells in mice. The devices are being developed and studied by researchers at Washington University School of Medicine in St. Louis and the University of Illinois at Urbana-Champaign as a potential treatment for pain that does not respond to other therapies.

Building on wireless technology that has the potential to interfere with pain, scientists have developed flexible, implantable devices that can activate -- and, in theory, block -- pain signals in the body and spinal cord before those signals reach the brain.
The researchers, at Washington University School of Medicine in St. Louis and the University of Illinois at Urbana-Champaign, said the implants one day may be used in different parts of the body to fight pain that doesn't respond to other therapies.
"Our eventual goal is to use this technology to treat pain in very specific locations by providing a kind of 'switch' to turn off the pain signals long before they reach the brain," said co-senior investigator Robert W. Gereau IV, PhD, the Dr. Seymour and Rose T. Brown Professor of Anesthesiology and director of the Washington University Pain Center.
The study is published online Nov. 9 in the journal Nature Biotechnology.
Because the devices are soft and stretchable, they can be implanted into parts of the body that move, Gereau explained. The devices previously developed by the scientists had to be anchored to bone.
"But when we're studying neurons in the spinal cord or in other areas outside of the central nervous system, we need stretchable implants that don't require anchoring," he said.
The new devices are held in place with sutures. Like the previous models, they contain microLED lights that can activate specific nerve cells. Gereau said he hopes to use the implants to blunt pain signals in patients who have pain that cannot be managed with standard therapies.
The researchers experimented with mice that were genetically engineered to have light-sensitive proteins on some of their nerve cells. To demonstrate that the implants could influence the pain pathway in nerve cells, the researchers activated a pain response with light. When the mice walked through a specific area in a maze, the implanted devices lit up and caused the mice to feel discomfort. Upon leaving that part of the maze, the devices turned off, and the discomfort dissipated. As a result, the animals quickly learned to avoid that part of the maze.
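The closed-loop logic of a place-aversion assay like this one, switch the implant on whenever the tracked animal is inside a designated zone, can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual tracking code; the corridor coordinates and zone boundaries are invented.

```python
def led_state(position, zone):
    """Return True (LED on) when the animal's position lies inside the zone."""
    lo, hi = zone
    return lo <= position <= hi

def run_trial(track, zone):
    """Replay a recorded position track and count samples with the LED on."""
    return sum(led_state(p, zone) for p in track)

# A mouse crossing a 10-unit corridor, with the stimulation zone at positions 4-6:
track = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(run_trial(track, (4, 6)))  # LED on for 3 of the 10 samples
```

In the actual experiment the animal learns to avoid the zone, so later tracks would contain fewer in-zone samples.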
The experiment would have been very difficult with older optogenetic devices, which are tethered to a power source and can inhibit the movement of the mice.
Because the new, smaller, devices are flexible and can be held in place with sutures, they also may have potential uses in or around the bladder, stomach, intestines, heart or other organs, according to co-principal investigator John A. Rogers, PhD, professor of materials science and engineering at the University of Illinois.
"They provide unique, biocompatible platforms for wireless delivery of light to virtually any targeted organ in the body," he said.
Rogers and Gereau designed the implants with an eye toward manufacturing processes that would allow for mass production so the devices could be available to other researchers. Gereau, Rogers and Michael R. Bruchas, PhD, associate professor of anesthesiology at Washington University, have launched a company called NeuroLux to aid in that goal.

An arms race among venomous animals?

This is an Anderson's pitviper (Trimeresurus andersoni), a highly venomous snake, photographed by Dr. Kartik Sunagar in the Andaman Islands of India.

In a new study published in the journal PLOS Genetics, scientists at the Hebrew University of Jerusalem have revealed new discoveries about how animal venom evolves.
Venom is a complex mixture of proteins and other toxic chemicals produced by animals such as snakes and spiders, either to incapacitate their prey or to defend against predators. The influence of positive selection (the process by which a protein changes rapidly over evolutionary time scales) in expanding and diversifying animal venoms is widely recognized.
This process was hypothesized to result from an evolutionary chemical arms race, in which the invention of potent venom in predatory animals and the evolution of venom resistance in their prey exert reciprocal selection pressures.
In contrast to positive selection, the role of purifying selection (also known as negative selection, which is the selective removal of deleterious genetic changes from a population) has rarely been considered in venom evolution.
Moreover, venom research has mostly neglected ancient animal groups in favor of focusing on venomous snakes and cone snails, which are both "young" animal groups that originated only recently in evolutionary timescales, approximately 50 million years ago. Consequently, it was concluded that venom evolution is mostly driven by positive selection.
In the new study, Dr. Yehu Moran at the Hebrew University's Department of Ecology, Evolution and Behavior and the guest scientist Dr. Kartik Sunagar examined numerous venom genes in different animals in order to unravel the unique evolutionary strategies of toxin gene families.
The researchers analyzed and compared the evolutionary patterns of over 3500 toxin sequences from 85 gene families. These toxins spanned the breadth of the animal kingdom, including ancient venomous groups such as centipedes, scorpions, spiders, coleoids (octopus, cuttlefish and squids) and cnidarians (jellyfish, sea anemones and hydras).
Unexpectedly, despite their long evolutionary histories, ancient animal groups were found to have only accumulated low variation in their toxins.
The analysis also revealed a striking contrast between the evolution of venom in ancient animal groups as compared to evolutionarily "young" animals. It also highlighted the significant role played by purifying selection in shaping the composition of venoms.
According to Dr. Yehu Moran, "Our research shows that while the venoms of ancient lineages evolve more slowly through purifying selection, the venoms in more recent lineages diversify rapidly under the influence of positive selection."
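The two regimes Moran describes are conventionally quantified by the ratio ω = dN/dS of nonsynonymous to synonymous substitution rates: ω > 1 signals positive (diversifying) selection, ω ≈ 1 neutrality, and ω < 1 purifying selection. A minimal classifier along those textbook lines can be sketched as follows; the tolerance band is an arbitrary illustration, not the study's statistical test.

```python
def selection_regime(dn, ds, tol=0.05):
    """Classify a gene's selection regime from its substitution rates.

    dn: nonsynonymous substitutions per nonsynonymous site
    ds: synonymous substitutions per synonymous site
    """
    if ds == 0:
        raise ValueError("dS must be nonzero to form the ratio")
    omega = dn / ds
    if omega > 1 + tol:
        return "positive"   # diversifying, as in "young" venomous lineages
    if omega < 1 - tol:
        return "purifying"  # conserving, as in ancient venomous lineages
    return "neutral"

print(selection_regime(0.9, 0.3))   # positive (omega = 3.0)
print(selection_regime(0.05, 0.5))  # purifying (omega = 0.1)
```

Real analyses estimate ω with maximum-likelihood codon models (e.g. codeml in PAML) rather than a raw ratio, but the interpretation of ω is the same.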
The findings enable the postulation of a new theory of venom evolution. According to this theory, toxin-producing genes in young venomous groups that enter a novel ecological niche experience a strong influence of positive selection that diversifies their toxins, thus increasing their chances of efficiently paralyzing relevant prey and predatory species in the new environment.
However, in the case of the ancient venomous groups, where the venom is already "optimized" and highly suitable for the ecological niche, the venom's rate of accumulating variations slows down under the influence of purifying selection, which preserves the potent toxins generated previously.
The proposed "two-speed" mode of venom evolution highlights the fascinating evolutionary dynamics of this complex biochemical cocktail, by showing for the first time the significant roles played by different forces of natural selection in shaping animal venoms.
According to Drs. Moran and Sunagar, "The 'two-speed' mode of evolution of animal venoms involves an initial period of expansion, resulting in the rapid diversification of the venom arsenal, followed by longer periods of purifying selection that preserve the now potent toxin pharmacopeia. However, species that have entered the stage of purification and fixation may re-enter the period of expansion if they experience a major shift in ecology and/or environment."
Source : Journal PLOS Genetics (http://dx.doi.org/10.1371/journal.pgen.1005596)

First-of-its-kind study of puberty timing in men

In the largest genomic analysis of puberty timing in men, new research conducted by scientists at the University of Cambridge and 23andMe shows that the timing of puberty in males and females is influenced by many of the same genetic factors. The study results are the first to quantify the strongly shared genetic basis for puberty timing between the sexes.
Published this week in Nature Communications, the study is the largest genomic analysis of puberty to look at both men and women. Previous work had identified 106 genetic variants that alter puberty timing in females, and the current study shows that those same genetic factors have very similar effects on male puberty timing. The study looked at genetic information of more than 55,000 male 23andMe customers who consented to participate in research. Following this first analysis, the collected data was compared to existing data from more than 250,000 women. The study focused on the genetic regions that influence age at voice breaking - a distinct developmental milestone that happens to young men as their larynx (voice box) lengthens when exposed to male hormones.
"Our study shows that although there are obvious physical differences in pubertal development between boys and girls, many of the underlying biological processes governing it are the same. It also shows that the age when men's voices break, even when recalled decades after the event, is an informative measure of puberty timing," says co-author Dr. Felix Day from the MRC Epidemiology Unit at the University of Cambridge.
"Until now, most of our understanding of the biological regulation of puberty timing has come from large studies of healthy women, in whom the stages of puberty are usually easier to remember, or studies of patients affected by rare disorders. Research has been scarce in men, largely because investigators have doubted the accuracy with which men recall pubertal events," explains study lead Dr. John Perry (also from the MRC Epidemiology Unit at the University of Cambridge).
In addition, the study finds five new genetic variants associated with puberty timing, some acting through known hormone pathways, others through previously overlooked hormone pathways.
One of the main aims of this study was to look at the relevance of male puberty timing in impacting health and the development of diseases. The study found that many of the genes involved in puberty timing were also shared with diseases that appear later in life. For most diseases, earlier puberty appeared genetically linked to poorer health outcomes.
Co-lead Dr. Ken Ong (also from the MRC Epidemiology Unit at the University of Cambridge) concludes, "There was already good evidence in women that earlier puberty timing leads to higher risks for health outcomes later in life such as Type 2 diabetes, obesity and cardiovascular disease. We now show that the same is true in men. The next steps will be to understand how to prevent early puberty in boys and girls, possibly by reducing childhood overweight and obesity, or by other means."
Source : Nature Communications (http://dx.doi.org/10.1038/ncomms9842)

Sunday, November 8, 2015

Meat -- and how it's cooked -- may impact kidney cancer risk


A new study indicates that a meat-rich diet may increase the risk of developing kidney cancer through mechanisms related to particular cooking compounds. Also, these associations may be modified by genetic susceptibility to kidney cancer. Published early online in CANCER, a peer-reviewed journal of the American Cancer Society, the study illustrates how diet and genetics may interact to impact cancer risk.
The incidence of renal cell carcinoma (RCC), the most common form of kidney cancer in adults, has been increasing in the United States and other developed nations. Investigators suspect that factors related to a western lifestyle--such as a diet high in meats, processed foods, and starches--may play an important role in this trend. To investigate, a team led by Xifeng Wu, MD, PhD, of The University of Texas MD Anderson Cancer Center in Houston, studied the dietary intake and genetic risk factors of 659 patients newly diagnosed with RCC and 699 healthy controls.
The researchers found that kidney cancer patients consumed more red and white meat compared with cancer-free individuals. Also, cancer patients consumed more cancer-causing chemicals that are produced when meat is cooked at high temperatures or over an open flame (particularly pan-frying or barbequing). Finally, the investigators discovered that individuals with certain genetic variants were more susceptible to the harmful effects of these cancer-causing chemicals.
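Case-control findings like these are commonly summarized as an odds ratio computed from a 2x2 exposure table. Here is a minimal sketch of that calculation; the exposure split is hypothetical (only the totals echo the study's 659 cases and 699 controls), so the resulting number is purely illustrative.

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table: (a/b) / (c/d) = a*d / (b*c)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical counts of high meat-mutagen intake among cases vs. controls
# (invented numbers, chosen only to show the arithmetic):
or_high_intake = odds_ratio(400, 259, 320, 379)
print(f"OR = {or_high_intake:.2f}")  # OR > 1 suggests higher risk with exposure
```

Published analyses would additionally adjust for confounders (age, smoking, body mass index) with logistic regression, but the crude odds ratio is the starting point.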
"Our study provides additional evidence for the role of red meat, white meat, and 2-amino-1-methyl-6-phenyl-imidazo(4,5-b)pyridine (PhIP) in RCC etiology and is the first study of dietary intake of mutagenic compounds and RCC risk to suggest an association with 2-amino-3,8-dimethylimidazo(4,5-f) quinoxaline (MeIQx), one of the most abundant heterocyclic amines commonly created in grilling, barbequing, and pan-frying meats at high temperatures," said Dr. Wu. "Also, our study is the first to evaluate the impact of RCC susceptibility variants, identified via genome-wide association studies, on the association between intake of mutagenic compounds and RCC risk."
While the study was small and limited to non-Hispanic whites, the findings suggest that reducing consumption of meat, especially when cooked at high temperatures or over an open flame, might serve as a public health intervention to reduce the risk of developing RCC. In addition, genetic testing might help to identify individuals at especially high risk.
Source : Journal CANCER (http://dx.doi.org/10.1002/cncr.29543)

Energy drink increases blood pressure, norepinephrine levels


Anna Svatikova, M.D., Ph.D., of the Mayo Clinic, Rochester, Minn., and colleagues randomly assigned 25 healthy volunteers (age 18 years or older) to consume a can (480 mL; 16 fl. oz.) of a commercially available energy drink (Rockstar; Rockstar Inc) and placebo drink within 5 minutes, in random order on 2 separate days, maximum 2 weeks apart. The placebo drink, selected to match the nutritional constituents of the energy drink, was similar in taste, texture, and color but lacked caffeine and other stimulants of the energy drink (240 mg of caffeine, 2,000 mg of taurine, and extracts of guarana seed, ginseng root, and milk thistle). This JAMA study is being released to coincide with its presentation at the American Heart Association's Scientific Sessions 2015.
Energy drink consumption has been associated with serious cardiovascular events, possibly related to caffeine and other stimulants. The researchers examined the effect of energy drink consumption on hemodynamic changes, such as blood pressure and heart rate. Participants were fasting and abstained from caffeine and alcohol 24 hours prior to each study day. Serum levels of caffeine, plasma glucose, and norepinephrine (noradrenaline) were measured and blood pressure and heart rate were obtained at baseline and 30 minutes after drink ingestion.
Caffeine levels remained unchanged after the placebo drink, but increased significantly after energy drink consumption. Consumption of the energy drink elicited a 6.2 percent increase in systolic blood pressure; diastolic blood pressure increased by 6.8 percent; average blood pressure increased after consumption of the energy drink by 6.4 percent. There was no significant difference in heart rate increase between the 2 groups. The average norepinephrine level increased from 150 pg/mL to 250 pg/mL after consumption of the energy drink and from 140 pg/mL to 179 pg/mL after placebo (change rate: 74 percent vs 31 percent, respectively).
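The blood-pressure figures above are plain percent increases over baseline. A one-line helper shows the arithmetic; the systolic readings below are invented (the summary does not quote raw values), chosen only to reproduce a rise of the reported 6.2 percent size.

```python
def pct_increase(baseline, post):
    """Percent change from a baseline to a post-ingestion measurement."""
    return (post - baseline) / baseline * 100

# Hypothetical systolic readings in mmHg (illustrative, not the study's data):
print(round(pct_increase(108.0, 114.7), 1))  # 6.2
```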
"These acute hemodynamic and adrenergic changes may predispose to increased cardiovascular risk," the authors write. "Further research in larger studies is needed to assess whether the observed acute changes are likely to increase cardiovascular risk."

Research reveals main reasons why people go to work when ill


High job demands, stress and job insecurity are among the main reasons why people go to work when they are ill, according to new research by an academic at the University of East Anglia (UEA).
The study aims to improve understanding of the key causes of employees going to work when sick, known as presenteeism, and to help make managers more aware of the existence of the growing phenomenon, what triggers the behaviour and what can be done to improve employees' health and productivity.
A key finding of the study, published today in the Journal of Occupational Health Psychology, is that presenteeism not only stems from ill health and stress, but from raised motivation, for example high job satisfaction and a strong sense of commitment to the organisation. This may motivate people to 'go the extra-mile', causing them to work more intensively, even when sick.
One of the significant links to presenteeism is the severity of organisational policies used to monitor or reduce staff absence, such as strict trigger points for disciplinary action, job insecurity, limited paid sick leave, or few absence days allowed without a medical certificate.
Lead author Dr Mariella Miraglia, a lecturer in organisational behaviour at UEA's Norwich Business School, argues that presenteeism is associated with work and personal factors, not just medical conditions. Also, that these factors are more strongly related to, and so more able to predict, presenteeism than absenteeism.
In previous research presenteeism has been associated with both negative and positive effects on employee productivity and welfare, with contradictory causes and consequences for individuals and organisations. It has been linked to errors, lower performance, exacerbating health problems and affecting wellbeing, with more productivity loss than absenteeism. The Centre for Mental Health calculated that presenteeism from mental ill health alone costs the UK economy £15.1 billion a year.
"This study sheds light on the controversial act of presenteeism, uncovering both positive and negative underlying processes," said Dr Miraglia, who worked with Dr Gary Johns of Concordia University in Montreal, Canada. "It demonstrates that presenteeism is associated with work features and personal characteristics and not only dictated by medical conditions, in contrast to the main perspective of occupational medicine and epidemiology.
"Working while ill can compound the effects of the initial illness and result in negative job attitudes and withdrawal from work. However, the possible negative consequences of being absent can prompt employees to show up ill or to return to work when not totally recovered. Organisations may want to carefully review attendance policies for features which could decrease absence at the cost of increased presenteeism."
The research analysed data from 61 previous studies involving more than 175,960 participants, including the European Working Conditions Survey which sampled employees from 34 countries. Dr Miraglia developed an analytical model to identify the most significant causes of presenteeism and absenteeism, with work and personal characteristics relating differently to presenteeism depending on whether they followed a 'health impairment' or 'attitudinal/motivational' path.
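Meta-analyses of this kind pool effect sizes across studies, weighting each study by its sample size or inverse variance. A minimal sample-size-weighted pooling sketch follows; the correlations and sample sizes are invented, not drawn from the 61 studies analysed in the paper.

```python
# Each tuple: (correlation between job demands and presenteeism, sample size).
# Illustrative values only.
studies = [(0.20, 500), (0.15, 1200), (0.25, 300), (0.18, 2000)]

def pooled_r(studies):
    """Sample-size-weighted mean correlation across studies."""
    total_n = sum(n for _, n in studies)
    return sum(r * n for r, n in studies) / total_n

print(f"pooled r = {pooled_r(studies):.3f}")
```

Formal meta-analysis (e.g. a random-effects model) would also estimate between-study heterogeneity, but the weighted mean conveys the basic idea of combining 61 studies into one estimate.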
Job demands, such as workload, understaffing, overtime and time pressure, along with difficulty of finding cover and personal financial difficulties, were found to be key reasons why people might not take a day off. Conflict between work and family, and vice versa, and being exposed to harassment, abuse, and discrimination at work were also positively related to presenteeism. This is because these negative experiences can exacerbate stress and harm health, requiring employees to choose between going to work and staying away.
Those who had a supportive work environment, for example supportive colleagues and a good relationship with managers, felt they did not have to go to work when ill, and were both more satisfied with their jobs and healthier. Optimism was linked to presenteeism, in that those with a positive outlook were more willing to carry on with their work while ill.
"Because presenteeism is more predictable than absenteeism, it is easy to modify by management actions," said Dr Miraglia. "Workplace wellness and health programmes may be desirable to reduce stress and work-related illness. Furthermore, although increasing job resources, such as job control and colleague, supervisor, and organisational support, can be helpful in tackling presenteeism through their positive impact on health, our results suggest that controlling job demands represents a key line of defence against the behaviour.
"Organisations may benefit from well-designed jobs that limit the level of demands to which employees are exposed to every day, for example by reducing excessive workload, time pressure and overtime work, as well as making sure they have the resources they need."
Dr Miraglia said further research was needed to understand when going to work while ill could be a "sustainable" and positive choice, for example in the case of a gradual recovery from long-term sickness, to improve self-esteem in the face of chronic illness or being an example of citizenship behaviour.
"It could be a good thing for some people, a way of integrating back into work again," she added. "But it would depend how much the individual and organisation wanted it and were prepared to be flexible, for example by modifying job descriptions or offering flexy time."
Source : Journal of Occupational Health Psychology (http://dx.doi.org/10.1037/ocp0000015), University of East Anglia

© News of everything. All rights reserved. Distributed by Eezy Blogger
