
Noreen M. Gulwani

Noreen has had a long love affair with creative writing, literature and poetry. With a degree in Journalism, and a thirst for travel and knowledge, she now distributes her time between these.

U.S. Air Force Unmanned Space Plane Returns After Two-Year Secret Mission

The U.S. Airforce's X-37B Orbital Test Vehicle mission 4 after landing at NASA's Kennedy Space Center Shuttle Landing Facility in Cape Canaveral, Florida, U.S., May 7, 2017. U.S. Air Force/Handout via REUTERS

The X-37B space plane, owned by the United States military, returned to NASA's Kennedy Space Center in Florida on Sunday after completing a two-year top-secret mission, the Air Force reported.

Resembling a miniature space shuttle, the unmanned plane landed at 7:47 a.m. on the runway once used for space shuttle landings, which have long been suspended.

The Boeing-built space plane blasted off in 2015 from the nearby Cape Canaveral Air Force Station atop an Atlas 5 rocket from United Launch Alliance, a partnership between Boeing Co and Lockheed Martin Corp.

One of two such vehicles in the Air Force fleet, the X-37B carried out undisclosed experiments over more than 700 days in orbit, making this the longest flight of the highly secretive program and the fourth mission handled by the Air Force Rapid Capabilities Office.

The Air Force said the orbiters perform risk reduction, operational concept development for reusable space vehicle technologies, and other experiments, but gave no further details about any of the procedures. Even the program's cost is classified.

The Secure World Foundation, a nonprofit group that promotes the peaceful exploration of space, remarked on the mission's secrecy, suggesting the craft may carry hardware being evaluated or tested for intelligence purposes.

The planes are about 9 meters (29 feet) long with a 4.6-meter (15-foot) wingspan, roughly a quarter the size of NASA's retired shuttles.

Also known as the Orbital Test Vehicle (OTV), the X-37B first blasted off in 2010 and stayed aloft for eight months. A second mission launched in March 2011 and returned after 15 months of flying; the third jetted off in December 2012 and came back after 22 months.

Sunday's landing was the first in Florida; the previous three had taken place at Vandenberg Air Force Base in California. The Air Force relocated the program in 2014 to NASA's former shuttle hangars.

The Air Force said the fifth X-37B mission will launch later this year from Cape Canaveral Air Force Station, just south of the Kennedy Space Center.

Odor Sensors: Sniffing Out the Disease in Us with Artificial Intelligence


Every human has a unique odor made up of countless organic compounds. These compounds reflect our internal composition, revealing details of who we are: our age, genetics, lifestyle and the underlying metabolic processes of our bodies.

Years before modern medicine, ancient Chinese and Greek physicians used a patient's scent to diagnose them. Modern research confirms that our body's fluids, skin and breath can tell us a lot about an illness: experts say the breath of a patient with diabetes can smell like rotten apples, while the skin of typhoid patients can smell like baking bread.

But where doctors and cancer-sniffing dogs are not enough, scientists have spent decades working on alternatives that are affordable, reliable and noninvasive: odor sensors.

It finally looks like those efforts will pay off. Billy Boyle, who runs a chemical sensor manufacturer in England, said the underlying technologies are now coming together, and he expects large-scale medical studies to begin soon, producing the data needed to prove the efficacy of odor analysis.

Boyle, an electronics engineer and the president and co-founder of Owlstone, started the company with two friends in 2004 to make sensors that detect explosives and chemical weapons for clients including the United States government. But when his wife, Kate Gross, was diagnosed with cancer in 2012, his focus shifted to medical sensors, particularly cancer detection.

Ms. Gross died in 2014, and Boyle says a big motivation for him is the thought that she might still be alive had her disease been detected earlier.

Owlstone has raised $23.5 million to put its odor-analysis technology in the hands of doctors. Britain's National Health Service is funding a 3,000-subject clinical trial of the company's sensors for identifying lung cancer.

The sensor, which looks like a SIM card and works like a chemical filter, consists of a silicon chip with metal layers and small gold electrodes.

Molecules in the odor sample are first ionized; electric currents then steer the chemicals of diagnostic interest through channels in the chip, where they can be detected.

Boyle added that what the sensor sniffs for can be changed in software, letting the company run trials on colorectal cancer while its partners work on conditions like irritable bowel disease.

With the University of Warwick, the company is also running a 1,400-person trial to diagnose colon cancer from urine samples, and testing chips that might help doctors pick the best medications for asthma sufferers by analyzing molecules in their breath.

Hossam Haick, another chemical engineer touched by cancer, is working on a similar technology in Israel.

His college friend's leukemia made him want to test whether a sensor could help with treatment, but he soon realized that early diagnosis was as important as the treatment itself.

Haick's sensors are made of gold nanoparticles and carbon nanotubes coated with ligands, molecular receptors that detect disease biomarkers in exhaled breath.

When these biomarkers stick to the ligands, the nanoparticles and tubes shrink or swell, changing the time it takes an electrical charge to flow between them. This loss or gain in conductivity is then decoded to diagnose disease.

The signals are sent to a computer, which decodes the scent into a signature and matches it to the illness the sensor was exposed to.

Haick said that with artificial intelligence, the machines get better at identification with each exposure. Rather than recognizing specific compounds that indicate particular diseases, Haick's machines smell the overall chemical composition that creates an odor.

Last December, Haick and his team published research in ACS Nano showing that his artificially intelligent nanoarray could distinguish among 17 different diseases with an accuracy of up to 86 percent.
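The pattern-recognition step Haick describes can be illustrated with a short sketch. This is not his actual pipeline: it trains a minimal nearest-centroid classifier on made-up numbers, where each sample is a vector of conductivity changes from a hypothetical four-sensor nanoarray and the two "diseases" are invented signatures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is one breath sample, each column
# the conductivity change of one sensor in the nanoarray. The two made-up
# "diseases" produce different average response patterns.
pattern_a = np.array([0.8, 0.1, 0.5, 0.2])   # disease A signature
pattern_b = np.array([0.2, 0.7, 0.1, 0.6])   # disease B signature

samples_a = pattern_a + 0.05 * rng.standard_normal((30, 4))
samples_b = pattern_b + 0.05 * rng.standard_normal((30, 4))

# "Training": store the mean response (centroid) per disease.
centroids = {"A": samples_a.mean(axis=0), "B": samples_b.mean(axis=0)}

def diagnose(reading):
    """Assign a new sensor reading to the nearest disease signature."""
    return min(centroids, key=lambda d: np.linalg.norm(reading - centroids[d]))

new_breath = pattern_b + 0.05 * rng.standard_normal(4)
print(diagnose(new_breath))  # prints "B"
```

A real system would learn far subtler patterns from many more sensors and samples, but the principle is the same: each exposure refines the stored signatures, so classification improves with data.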

A team in the U.S. recently received an $815,000 grant from the Kleberg Foundation to work on an odor sensor that would spot ovarian cancer in blood plasma; their sensors use snippets of DNA to bind to odor molecules.

Along with these teams, others in Austria, Japan and Switzerland are developing similar odor sensors to diagnose illness.

While the different groups are in competition, the breakthrough they all seek carries life-saving possibilities across the entire disease spectrum. Industry specialists believe that in about five years the tools should be ready for medical practitioners to use in clinics and hospitals.

Study Links Scorching Dry Summers to Climate Change


The past winter was strange across the United States. California was drenched with rain, while unexpected warmth spread over the East Coast and the Midwest. In New York, sales of shovels and salt spiked, and Washington's cherry blossoms bloomed and died off too soon.
Many people attribute today's weird weather to climate change, but many Americans also have a hard time making that connection, partly because of decades of messaging from TV journalists about the disconnect between climate and weather. We have been told that weather and climate are not related, that one is specific while the other is a general probability, leading us to believe that changes in the weather cannot be caused by climate change.

That claim no longer holds. The nonprofit organization Climate Central recently reported in The New York Times that global warming was behind the heat wave and the early onset of warm weather.

Warm February weather is now three times more likely than it was in the 1900s, the organization's World Weather Attribution team revealed.

The method behind the findings, called rapid attribution, works by comparing observed meteorological data with output from climate models. A study published in the Proceedings of the National Academy of Sciences used a new standardized approach to ask whether individual weather events are related to human-caused global warming, and it produced some important findings.

The most important of those findings were:

Across 97 percent of the observed area, the hottest day and the hottest month of summer are getting hotter because of human-caused climate change. In the tropics, a record-setting hottest month is now four times more probable than it would be without that warming, and tropical regions are twice as likely to experience their driest year on record.

Daniel Swain, a climate scientist at the University of California and an author of the paper, commented that if there is a heat wave now, anywhere, it is likely to be linked to human actions.

A rapid attribution study at this scale was the first of its kind, and the researchers noted that it let them see how climate change is affecting temperatures from a broader vantage than ever before.

The team said its goal is to show the public that scientists have developed tools that can explain individual events, and to be able to give people an answer when they are shocked by an extreme event.

The method isn't perfect: according to Swain, it is still hard to attribute a complex extreme event, like the recent drought in California, to climate change.

But beyond connecting causes to events, the study's more lasting contribution may be operational: it established a solid process for linking weather and climate change.

Put simply, the method asks scientists first to identify whether there is an underlying trend in the past weather data for a particular locality. Next, they determine how much that trend contributed to the extremity of any one weather event. Finally, they compare that degree against two climate-model tests: whether a changed climate, rich like ours in atmospheric carbon dioxide, would yield an equally extreme trend, and whether a historically "normal" climate, like that of 1850 or 1900, would.
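Those three steps can be sketched in code. Everything below is synthetic: made-up "observations" with a built-in warming trend, and two invented ensembles of model-run trends standing in for the CO2-enriched and roughly 1900-era model worlds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: is there an underlying trend in the observed record?
years = np.arange(1950, 2017)
observed = 14.0 + 0.02 * (years - 1950) + 0.3 * rng.standard_normal(years.size)
trend = np.polyfit(years, observed, 1)[0]          # degrees C per year

# Step 2: how much did that trend add to one extreme event
# (here, the hottest year in the record)?
hottest = observed.max()
detrended = observed - trend * (years - years[0])
trend_contribution = hottest - detrended.max()

# Step 3: compare against two model worlds. Each array holds the warming
# trend produced by many hypothetical model runs: one ensemble with
# elevated CO2, one "normal" climate like that of 1850 or 1900.
co2_world = 0.02 + 0.005 * rng.standard_normal(1000)
normal_world = 0.000 + 0.005 * rng.standard_normal(1000)

p_co2 = (co2_world >= trend).mean()        # fraction of CO2 runs at least this steep
p_normal = (normal_world >= trend).mean()  # same for the preindustrial-style runs

print(f"observed trend: {trend:.3f} C/yr")
print(f"plausible under elevated CO2: {p_co2:.2f}; under a 1900 climate: {p_normal:.2f}")
```

In this toy run the observed trend is commonplace among the CO2-enriched model runs but essentially never appears in the "normal" ones, which is the shape of evidence the attribution method looks for.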

One limitation of the paper is that it used only one model, the Community Earth System Model from the National Center for Atmospheric Research, to test whether those trends were climate-driven. Swain acknowledges that more models need to be brought into the process.

Because the method requires a massive collection of data, it works only in places with decades of recorded temperatures. It therefore covers densely populated, long-settled areas such as North America, Europe, Asia and the South American coasts, and cannot address recent weather trends in Africa or over much of the oceans.

For the densely populated portions of the earth, and certainly for the United States, it is plausible to imagine a world where estimates of "climate-changed-ness" become part of our meteorological scene, particularly if machine-learning algorithms keep improving how we estimate the outputs of a climate model.

20,000 People Hired to Create the 'Chinese Wikipedia'


To compete with Wikipedia, China plans to launch its own online encyclopedia next year.

Chinese officials revealed that more than 20,000 people have already been hired for the project, which will contain 300,000 entries of about 1,000 words each.

Unlike Wikipedia, however, the encyclopedia will be written by scholars from selected state-run universities rather than being openly editable.

While Wikipedia is available in China, some of its content is blocked.

Yang Muzhi, chairperson of the Books and Periodicals Distribution Association of China and editor-in-chief of the project, said in April that the Chinese Encyclopedia is not a book but 'a great wall of culture'.

He stated that the encyclopedia aims to promote the nation's scientific and technological progress, uphold its historical legacy, and strengthen the core values of socialism.

With a staggering 720 million internet users, China’s online population is the largest in the world.

Yang listed Wikipedia as a competitor and added that the country faces pressure from international bodies to create its own platform to help guide the public and society.

The Encyclopedia of China was first published on paper in 1993; its second edition was released in 2012.

Critics, however, claimed that the government-funded work omitted and distorted some entries for political reasons.

Although an online version was approved in 2011, work on it began only recently.

The release of the Chinese encyclopedia will put the government in head-on competition with local online encyclopedias run by Baidu and Qihu 360, along with the largest platform of all, Wikipedia.

Today Chinese users can read Wikipedia content as long as it does not touch sensitive topics, such as the Dalai Lama or President Xi Jinping, which are blocked entirely.

Taha Yasseri, a researcher at the Oxford Internet Institute, explained that, driven by the need for information, the Chinese public currently reaches Wikipedia through anti-filtering tools, something the authoritarian state does not like.

A state-owned and state-controlled platform is thus a way for the government to steer users toward its own content.

Joss Wright, a colleague of Yasseri's, added that the platform could offer the more exclusively local experience that Chinese users often prefer.

Mr. Yang, speaking to a newspaper last year, used the word 'bewitching' to describe Wikipedia's influence and appeal among the Chinese masses.

He claimed, however, that his encyclopedia has the largest and best-qualified team of authors in the world.

He stated that their goal was not to catch up to Wikipedia but rather to ‘overtake’ it.

Beyond China's censorship, Turkey imposed a full ban on the website just last week, citing the country's unstable political environment after last year's failed coup attempt.

Mars’ Soil Pressed into Bricks Will Make Construction on the Planet Much Easier


Photo by David Baillot, materials processed by Brian J. Chow and Yu Qiao

Great news for space travel enthusiasts: soil much like that on Mars can be pressed into solid bricks without any extra materials or additives to hold it together. That means real Martian soil will probably behave the same way.

This is a huge deal because it would make construction on the red planet much easier, whether for living quarters or for facilities supporting interplanetary travel. By making building easier, the discovery takes some of the complication out of one of the biggest challenges of a human stay on Mars.

The discovery was made by a group of engineers who hammered and compacted a material called Mars soil simulant, a blend of earth rocks with the same chemical makeup as Martian soil and grains similar in shape and size to Martian grains.

After working with the material for a while, the engineers found that simply applying the right amount of pressure produced small but stiff bricks, stronger than steel-reinforced concrete.

This is new to us earthlings: when we build, we combine our materials with a binder or special adhesive to hold everything together. The binder acts like glue, sticking the pieces firmly in place.

The simulated Martian dirt, however, is special: it contains its own innate material that acts like glue. When pressurized and compacted, these particles give the soil its strength, Yu Qiao, lead researcher of the NASA-funded study, told The Verge.

Because the soil is only a simulant, its characteristics may not translate exactly to Mars, but the result raises the odds that they will. If they do, it would be the best news yet for those dreaming of visiting the planet in their lifetime, especially after Musk's declared intention of taking them there.

Many experts already agree that the first humans on Mars will need to use the planet's own resources to build anything there. Humans cannot afford, and will not have the capacity, to haul everything they need to Mars from Earth. Besides being expensive and enormously complicated, shipping building material would make any permanent settlement plan unviable.

Astronauts planning to live on Mars will need to live off the land, however barren and infertile, since it can still serve other purposes. According to the findings, published in Scientific Reports, buildings and structures made with Martian soil will actually require much less effort than expected.

Before this study, Qiao's NASA-funded team was looking for ways to turn lunar soil into building material, using some technique to hold it together. That was back when the space agency still intended to return to the moon. Lunar soil needed a binder, and the plan was to use as little as possible so that only a minimal amount would have to be shipped there.

Then in 2010 the agency turned toward Mars, and so did Qiao's team. They began experimenting with Mars simulant soil the same way as with the lunar material, that is, with a binder. When that worked, they kept pushing the binder requirement lower and lower, and ended up discovering that the Martian soil bonded perfectly well on its own.

The researchers knew something in the soil itself allowed it to behave that way. It turned out to be iron oxide, the compound that gives the planet its signature red hue. When crushed, it cracks easily, and under pressure it forms very strong bonds with other iron oxide particles.

Qiao sees the material being used to make landing pads for ships that will land on the planet and habitats for people. The best way to use the soil, he says, would be to build it up slowly in layers, much as a 3D printer does.


Trump is Working His Way to Cut Down Corporate Tax Rates


Officials revealed yesterday evening that President Donald Trump plans to slash the corporate income tax rate and give multinational corporations a steep tax cut on overseas profits brought into the U.S.

As financial markets await the White House tax plan, an administration official said Trump is expected to propose cutting the top rate on pass-through businesses, which include sole proprietorships and small businesses, from 39.6 percent to 15 percent.

The official also said Trump will propose cutting the income tax payable by public corporations from 35 percent to 15 percent, and allowing multinationals to bring in their overseas profits at a much lower rate of 10 percent instead of the current 35 percent.

The proposal is not expected to contain any of the controversial "border-adjustment" taxes on imports that featured in earlier proposals circulated by Republicans in the House of Representatives, which aimed to offset the revenue lost to tax cuts.

Trump's tax scheme will look modest next to the kind of in-depth tax overhaul Republicans have been discussing, and will serve primarily as a guide for lawmakers in the Senate and the House.

The proposal is unlikely to include new sources of revenue to offset the losses from the tax cuts, so if ratified it would probably add billions of dollars to the federal deficit.

Trump's Treasury Secretary Steve Mnuchin and Gary Cohn, director of the National Economic Council, were sent to Capitol Hill on Tuesday to brief lawmakers on the plan, the details of which will be released to the public later.

Mnuchin is leading the administration's effort to design a tax package that can win support in Congress, but it will be a long time before the plans become law, even though Republicans control both the House and the Senate.

According to him, the cuts will pay for themselves by generating higher economic growth, but deficit hawks in both parties, including Trump's own Republican Party, are likely to question those projections.

Trump may also cap the top individual tax rate at 33 percent, cut taxes for the middle class, and repeal the estate tax and the alternative minimum tax.

It is still not confirmed whether the President will include provisions that could attract Democratic votes, such as infrastructure funding or Ivanka Trump's proposed child-care tax credit.

Meeting at Capitol Hill

Reports from the meeting attended by Cohn and Mnuchin, both veterans of the investment bank Goldman Sachs, described it as productive and positive. Leaving the Capitol, Mnuchin told reporters there is no doubt that Republicans and the Trump administration agree on the basic elements of tax reform.

A senior White House official also said Trump wants Congress to pass the tax reforms by mid-autumn.

Washington policy analysts commented that the White House plan could be at odds in some ways with the larger tax proposal drawn up months ago by House Republicans. That could thwart the consensus building required for full tax reform, a political achievement not managed since President Ronald Reagan's administration in 1986.

How Princeton Researchers Dug AI Bias Out of the Roots of Language


We already know that machine learning, or "deep learning," picks up inherent bias from the data sets it is given. One example was an AI-judged beauty contest run by Beauty.AI, which picked mostly white winners, with only six Asian winners and one dark-skinned winner, even though people from India, China, Africa and the U.S. participated.

Some may argue that the results simply tracked the number of entrants from each race, with more white participants producing more white winners, but that is not how it was supposed to work. The contest was meant to judge attractiveness by facial structure and features, wrinkles, facial age versus actual age, and symmetry. The algorithms were not supposed to judge skin color, but they did anyway.

Another example was Tay, Microsoft's millennial-mimicking chatbot, which turned racist, sexist and neo-Nazi on Twitter, tweeting slurs and abuse, wishing hellfire on feminists and outright saying "Hitler was right I hate the Jews." Microsoft not only had to shut the bot down, it also deleted many of the offensive tweets.

These examples show how our AI creations fall prey to our own prejudices. Now researchers at Princeton have worked out a measurable cause in a study meant to help us understand our future assistants, or overlords, better. They have also developed an algorithm that can predict human bias from an in-depth analysis of how English is used online.

AI systems learn human language from huge text collections such as the Common Crawl, the result of a massive trawl of the internet done in 2014 that contains more than 840 billion words. Researcher Aylin Caliskan and her team at Princeton's Center for Information Technology Policy wondered whether the Crawl contains biases an algorithm could pick up; after all, it is made of words typed by millions of people all over the world.

So the team turned to the Implicit Association Test (IAT), which is used to measure people's unconscious social attitudes. The test checks a person's inherent associations with a word: many people, for example, associate women with family and men with work, which counts as a bias. The strength of a bias is measured by time: the longer a subject takes to form an association, the weaker that association is. The test is often useful in uncovering people's stereotypes about the world around them.

With the IAT as their model, the Princeton scientists created the Word Embedding Association Test (WEAT), which checks which concepts in a text are more strongly associated than others, since that is how AI gives words their meanings. For the word-embedding part they used Stanford's GloVe project, which groups words with similar attributes or associations together; a grouping for the word 'girl,' for example, would also contain female, woman, lady or mother. The point is to bring similar concepts together for easy study.

Using WEAT, the Princeton team evaluated text after text of the kind fed to AI machines, measuring how words become associated online by looking at how close together they appear. The closer a pair of words sits in text, the more strongly they are associated, and WEAT also accounts for how frequently such pairings occur. Where the IAT measures the strength of an association by response time, WEAT replaces time with the closeness and frequency of two concepts in written text.
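A toy version of this association measurement can be written in a few lines. The vectors below are hand-built, three-dimensional stand-ins for real GloVe embeddings, deliberately constructed so that the gendered bias described above shows up in the cosine similarities; a real WEAT score aggregates such differences over whole sets of target and attribute words.

```python
import numpy as np

# Toy word vectors standing in for real GloVe embeddings, hand-built so
# that "she" sits near family words and "he" near career words.
vec = {
    "he":     np.array([1.0, 0.1, 0.0]),
    "she":    np.array([0.1, 1.0, 0.0]),
    "career": np.array([0.9, 0.2, 0.1]),
    "family": np.array([0.2, 0.9, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: how close two word vectors point."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word):
    """WEAT-style score: how much the word leans toward "career"
    rather than "family". Positive means career-leaning."""
    return cosine(vec[word], vec["career"]) - cosine(vec[word], vec["family"])

print(f"he:  {association('he'):+.2f}")   # positive: leans toward career
print(f"she: {association('she'):+.2f}")  # negative: leans toward family
```

With real embeddings trained on the Common Crawl, the same arithmetic recovers the biases described in the study, because words that co-occur closely and frequently end up with similar vectors.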

By adapting the test to measure bias in machine learning, the researchers were able to demonstrate, not just speculate, that when human data is fed to AI, it really does learn our biases from the language, and then acts on them.

A follow-up experiment showed the bias in machine translation. Asked to translate sentences from Turkish, which uses a single gender-neutral pronoun, into English, the translation bot rendered 'he/she/it is a doctor' as 'he,' but had no difficulty choosing 'she' once the word 'doctor' was replaced with 'nurse.'

Caliskan and her team also found that female names were associated more with family terms, while male names were associated more with career terms. The authors said an AI machine usually learns biases in two ways: from its creators, the programmers themselves, and from the data fed into it, which is the collective social narrative of everyone writing anything online. Such biases, they said, are built into the language itself.

Through their test and experiment, the team traced most of the prejudices our machines learn back to language itself. The researchers explained that through words, we and our machines derive meanings and infer ideas without directly experiencing the world. Good or bad, true or not, people and intelligent computers absorb these meanings from how a word is used, and then reuse them back in the world.

The study also showed that language carries some underlying truths: its association of women with nursing, for instance, exercises a stereotype but is also backed by the fact that more women than men work in nursing. In that sense, the machine reflected reality.

The team reported that while language contains a lot of bias, it also reflects the world. The problem with human language and its bias, they believe, is humans themselves. Since the way we speak is not going to change, the researchers think that rather than letting AI make decisions for us, we will probably need a human as the final gatekeeper to approve those decisions, because only humans know their biases best.

The evolution of AI may automate many current jobs, but it will also create new ones that involve presiding over and supervising artificial intelligence and its performance. Humans are the only ones who can tell when a machine might be acting on its prejudice, and so it falls to us to make sure our machines do not carry out biased acts.

The research could also prove useful in the future, since the WEAT model can detect prejudice and bias in other cultures, languages and localities. Instead of testing humans, which is expensive, time-consuming and laborious, sociologists and other scientists can turn to machines, giving them human-written text to check for the prejudices it may contain.


Erdogan More Powerful Than Ever; Turkey Says 1,000 "Secret Imams" Caught


Turkish law enforcement said it arrested more than 1,000 individuals on April 26 who were suspected of having secretly infiltrated police forces across the country on behalf of a U.S.-based cleric the Turkish government blames for the failed coup attempt last July.

The nationwide action against suspected followers of the cleric, Fethullah Gulen, was among the largest such operations in months. Gulen, a former ally of Turkish President Tayyip Erdogan, is now accused by the government of trying to overthrow Erdogan by force.

Turkish interior minister Suleyman Soylu said the overnight operation targeted a Gulen network of so-called 'secret imams' that had infiltrated local police forces.

So far the ongoing operation has seen 1,009 such imams arrested across 72 provinces, the minister told reporters in the capital, Ankara.

Following the failed coup in July, Turkish authorities detained more than 40,000 people and suspended or fired around 120,000 workers from a vast array of professions, from police officers and soldiers to teachers and public servants, over alleged but unconfirmed links with terrorist organizations.

These new arrests come just over a week after Turkish voters narrowly approved a proposal to expand President Erdogan's already extensive powers in a referendum that observers say was marred by irregularities.

Now armed with executive powers, Erdogan and Prime Minister Yildirim plan to put the senate through an overhaul, a power that no elected official of the state has ever held.

These new powers will also lead to the elimination of the prime minister's office by 2019, making the president the supreme authority on all matters in the country.

Through this referendum, Erdogan is also set to rejoin the Justice and Development Party, currently headed by PM Yildirim, which will formally put him in command of a nation that is the largest economy in the Middle East.

The vote deeply divided the country, and Erdogan's critics worry about a further drift toward authoritarianism under a leader they see as bent on dismantling the democratic and secular structure of modern Turkey.

Erdogan rejects this, arguing that concentrating power in the presidency will curb the instability that often accompanies coalition governments. This, he contends, is especially important at a time when the country faces a number of problems, including threats from Kurdish and Islamist militants.

The president reminded reporters on Tuesday that the objective of the attempted coup was to topple the authorities and destroy the state.

He remarked that the government is working to cleanse the armed forces, police, and judiciary of FETO members; FETO, an acronym for the Gulenist Terrorist Organization, is the government's term for Gulen's supporters.

Erdogan likened the fight against Gulen to the state's war against ISIS and the Kurdish PKK fighters, both of which are considered terrorist organizations by the United States, the European Union, and Turkey.

He also added that the government will continue its tireless efforts to uphold the values and liberties of the Turkish people that befit a democratic nation. However, he made clear that it will remain equally committed to the fight against FETO, the PKK, and other terrorist groups such as Daesh.

Immediately after the coup, many Turks supported the mass suspensions, agreeing with Erdogan's accusation that Gulen had orchestrated the revolt, which left 240 people dead, most of them civilians. But as the arrests expanded, criticism grew.

Families of those sacked or detained since July, however, say their relatives had no connection to the armed attempt to topple the government; rather, they are victims of a strategy to tighten Erdogan's grip on the country.

Amazon’s New Echo Look; A Sassy Friend or a Spying Marketer in Disguise?



Amazon has just revealed the Echo Look, an upgraded home assistant much like the previous Echo but with a built-in camera that can take your pictures and videos and even judge your outfit, like a personal stylist.

The $199 voice-controlled device will listen to your commands and use artificial intelligence, online data on fashion trends, and your body type, gleaned from your pictures, to decide whether those skinny jeans make you look fat or not.

To give you the right fashion advice, the Echo Look comes with a built-in app called StyleCheck, which can make your fashion choices for you, saving you the time and stress you dread every morning. Based on the time of day and the weather outside, the app will pick out the most appropriate outfits for you to wear. You can also tell the software the location you want to dress for, and it will follow suit.

With its newfound vision and a fashion-forward application, Alexa is supposed to watch you dress and undress and dress again until you find the best look of the day. You can keep her in your walk-in closet, your bedroom, or even your bathroom, if you really like having your privacy invaded.

If you aren't sure whether you look on point and don't want to rely solely on a robot's style advice, you can also ask Alexa to share those pictures with your friends, who can then tell you what they think.

The Echo Look may seem harmless, even exciting, to those who pore over the gadget world awaiting one new toy after another, but to many it raises various questions about the security and privacy of such smart home devices.

A few months ago, in a murder trial still underway, Alexa was set to be a prime witness, but Amazon refused to release the voice recordings and commands the device had captured on the night of the murder at the murder location, the owner's house. Amazon resisted sharing its client's data under the privacy protections of the First Amendment, until the client himself later consented.

This case raised much concern over what such devices actually record and save, and whether we, the consumers, surrendered our basic right to privacy from everyone, including the government and corporations, by installing such gadgets in our homes.

The photos, audio, and video taken by the device are stored indefinitely in the AWS cloud and in the Echo app until the user manually deletes them. When Motherboard asked Amazon whether all this user data would be sold to third parties, the company did not answer the question.

In addition, the algorithm behind Amazon's fashion checker is mostly unknown at the moment, creating apprehension about just how much faith we can put in Alexa's style sense. Just like us, the AI we create comes with biases in judgment, as demonstrated only recently by an AI-judged beauty contest whose winners were almost all white.

Now, not only can Amazon continuously collect huge amounts of data from its users about their style and shopping habits, but it can also use the Echo's powerful machine learning capabilities to sell us clothes, accessories, and other style items it all but knows we will buy, because it will be the one suggesting them to us.

Lastly, through face detection and full-length photos, as techno-sociologist Zeynep Tufekci has pointed out, the device may be able to tell when you are happy, sad, or even pregnant, along with a lot of other data about your physical, social, and sexual self that you never disclosed.

Amazon's algorithm can infer things about us that we do not even realize ourselves. It could sell you more makeup when you are sad, exercise equipment when you feel fat, and many other things beyond our imagination. Echo watchers now wait to see exactly what Amazon will do with all this collected data.

The Science We Do Is Becoming More Unreliable, Warns Study


The institutional and commercial pressure on scientists and researchers to publish more and more papers in top journals has been eroding the credibility of science, according to concerned members of the scientific community. A new experiment has now actually demonstrated the deterioration.

Researchers in the US wanted to simulate what happens when scientists compete for professional prestige and funding, so they created a computer model that captures how scientists are under immense pressure to publish sensational results.

Developed by researchers at the University of California, the model was populated with simulated lab groups that were honest: they did not cheat or intentionally manipulate their results. They were, however, given bigger rewards for successfully publishing "novel" findings, just as in reality. The simulated groups could also spend more effort conducting their procedures rigorously, which would improve the quality of their studies while reducing their output in quantity.

The outcome, described by lead researcher Paul Smaldino on the website The Conversation, was that over time effort fell to its minimum while false discoveries increased exponentially.

Additionally, the model suggested that scientists who employ shortcuts to obtain the incentives will likely pass the same working methods on to the next generation of scientists trained in their labs. This creates the effect the study is named after: "the natural selection of bad science."
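That selection dynamic can be sketched in a few lines of code. The toy model below is our own illustration, not the authors' published model: every lab has an "effort" level, labs that expend less effort publish more papers, and each generation the least-published lab copies the habits of the most-published one, so average rigor drifts downward without anyone ever cheating:

```python
import random

def evolve(labs=50, generations=100, seed=1):
    # Toy model (not the authors' code): lower effort -> more papers,
    # and the least-published lab imitates the most-published lab.
    rng = random.Random(seed)
    efforts = [rng.uniform(0.1, 1.0) for _ in range(labs)]
    for _ in range(generations):
        # publication count is inversely tied to effort, plus some luck
        papers = [(10 * (1 - e) + rng.random(), i) for i, e in enumerate(efforts)]
        papers.sort()
        least, most = papers[0][1], papers[-1][1]
        # imitation with a little noise, clamped to a sane range
        efforts[least] = min(1.0, max(0.05, efforts[most] + rng.gauss(0, 0.02)))
    return sum(efforts) / labs  # mean effort across all labs
```

Running `evolve()` with more generations yields a lower mean effort than the starting population had, echoing the paper's finding that no individual dishonesty is needed for rigor to erode.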

Smaldino, speaking to The Guardian, pointed out that as long as publishing surprising results in high-profile journals is highly rewarded, above the subtle, nuanced differences most studies actually reveal, dodgy methods of maximizing such results will remain widespread.

While this is not the first time such a problem has been reported, it is probably the first time the numbers have been run through a computer simulation for a study.

A concerning phenomenon science is currently going through is the so-called reproducibility crisis: weak discoveries that are hard to reproduce still gain visibility because of their shocking or novel nature. Such studies clutter the scientific record, yet they grab the attention of reporters and the media.
Journals and the mainstream media prefer such studies for their shock factor and novelty, but they carry a real risk of damaging the integrity of science, particularly because of the pressure scientists feel to twist their papers to make such an impression. It is a self-perpetuating cycle: when attention-grabbing studies are published, they attract more grants and funding, which lets the researchers conduct yet more research of the same kind.

Notably, this drift toward substandard science requires no intentional planning, dishonesty, or deception from individual scientists, which makes it all the easier to take hold. This is not to say that all scientists and researchers have abandoned rigorous methods and scientific validity, but if institutions keep rewarding shock science over in-depth outcomes, "bad science" will proliferate unchecked.

The problem is compounded by other quantifiable measures used to rate the importance of papers and their researchers. One such measure is the p-value, which can be exploited and made misleading, creating all kinds of false impressions and harming science.
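A quick illustration of why a raw p-value can mislead (hypothetical numbers, not taken from the paper): if enough "studies" test pure noise, roughly one in twenty will clear the conventional p < 0.05 bar by chance alone, and publishing only those guarantees a literature of false positives:

```python
import random

def false_positive_rate(studies=2000, flips=100, seed=7):
    # Every "study" tests a fair coin, so there is no real effect to find;
    # 60+ or 40- heads out of 100 flips is roughly the p < 0.05 threshold.
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(studies)
        if not 40 < sum(rng.random() < 0.5 for _ in range(flips)) < 60
    )
    return hits / studies

rate = false_positive_rate()
# roughly one study in twenty "succeeds" despite measuring pure noise
```

Selectively reporting only the "significant" runs, a practice often called p-hacking, turns this baseline error rate into a steady stream of striking but unreproducible findings.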

Rating scientists on their sales figures is not the right way to go if we do not want them striving to hit targets like salesmen. The hard but necessary solution to this conundrum must be developed at the institutional level, with a move from judging the quantity of a scientist's work to practices that reward quality.

In their paper, the researchers emphasize that the long-term cost of such simplistic quantitative measures will be hefty. To ensure that the science we do today is both reproducible and meaningful, institutions must encourage and reward that kind of work.