Excessive use of social media such as Facebook, Snapchat, and Instagram is associated with poor well-being and could lead to depression and loneliness, researchers have warned. The study, published in the Journal of Social and Clinical Psychology, showed that limiting screen time on these apps could boost one’s wellness. “When you are not busy getting sucked into clickbait social media, you are actually spending more time on things that are more likely to make you feel better about your life,” said Melissa Hunt from the University of Pennsylvania in the US.

The study included 143 undergraduate participants, and the team designed its experiment around the three platforms most popular with them. The researchers collected objective usage data automatically tracked by mobile phones for active apps (not those running in the background) and asked respondents to complete a survey to determine mood and well-being. The participants were then randomly assigned either to a control group, which maintained its typical social-media behaviour, or to an experimental group that limited time on Facebook, Snapchat, and Instagram to 10 minutes per platform per day. In addition, the participants shared mobile phone battery screenshots for the next three weeks to give the researchers weekly tallies for each individual.
Last September, a complaint was filed against Google and other ad auction companies over a data breach that “affects virtually every user on the web”. The complaint, brought by a group of privacy activists and browser makers, alleged that tech companies broadcast people’s personal data to dozens of companies, without proper security, through the mechanism of “behavioural ads”. It stated that every time a person visits a website and is shown a “behavioural” ad, intimate personal data describing that visitor and what they are watching online is captured and broadcast to tens or hundreds of companies, in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website.

The complaints were lodged by Jim Killock of the U.K.’s Open Rights Group, tech policy researcher Michael Veale of University College London, and Johnny Ryan of the pro-privacy browser firm Brave. They claimed that Google and other ad-tech firms were breaking the EU’s strict General Data Protection Regulation (GDPR) by unlawfully recording people’s sensitive characteristics. Now, new evidence has been released by the same organizations that filed last September’s complaint, showing that the data broadcast includes information about people’s ethnicity, disabilities, sexual orientation, and more. This sensitive information allows advertisers to specifically target incest and abuse victims, or those with eating disorders. The irony is that yesterday was International Data Protection Day.

What is behavioural advertising?

Yahoo Finance has explained the concept of behavioural advertising very simply: the online ad industry tracks a person’s movements around the internet and builds a profile based on what the individual looks at and which sites they visit.
On visiting a webpage that runs behavioural ads, an automated auction takes place between ad agencies, with the winner being allowed to show the user an ad that supposedly matches their profile. This ultimately means that, for the real-time bidding system to work, personal details of the users have to be broadcast to the advertisers in so-called “bid requests”.

Evidence against Google and IAB

Joining the list of complainants is Poland’s Panoptykon Foundation, another rights group, which has complained to its local data protection authority about organizations including Google and the Interactive Advertising Bureau (IAB), the industry body that sets the rules for ad auctions. The evidence submitted by the complainants comprises category lists from Google and the IAB, including topics such as being an incest victim, having cancer, having a substance-abuse problem, being into a certain kind of politics, or adhering to a certain religion or sect. Categories such as special needs kids, endocrine and metabolic diseases, birth control, infertility, diabetes, Islam, Judaism, disabled sports, and bankruptcy serve as supplementary evidence for the two original complaints filed with the UK’s ICO and the Irish DPC last year.

A Google spokesperson told TechCrunch that the company has “strict policies that prohibit advertisers on our platforms from targeting individuals on the basis of sensitive categories” and that if it found ads violating those policies, it would take immediate action. The original IAB lists can be downloaded as a spreadsheet. The PDF versions of the IAB lists, with special category and sensitive data highlighted by the complainants, can be viewed here (v1) and here (v2). You can also download Google’s original document for more insights on this news.
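To make the real-time bidding flow described above concrete, here is a minimal, hypothetical sketch in Python. It is loosely modeled on OpenRTB-style bid requests; the field names, segment labels, and auction logic are illustrative assumptions, not Google’s or the IAB’s actual implementation.

```python
# Hypothetical sketch of a real-time bidding exchange. A "bid request"
# is broadcast to prospective advertisers when a user loads a page --
# note how much profile data travels with it.
bid_request = {
    "id": "req-001",
    "site": {"page": "https://example.com/article"},
    "device": {"ip": "203.0.113.7", "geo": {"country": "GB"}},
    "user": {
        "id": "cookie-synced-user-42",
        # Interest-segment labels inferred from browsing history --
        # the kind of sensitive categorisation the complaints target.
        "segments": ["politics/left", "health/diabetes"],
    },
}

def run_auction(request, bidders):
    """Ask each bidder for a price; the highest bid wins the ad slot."""
    bids = [(bidder(request), name) for name, bidder in bidders.items()]
    price, winner = max(bids)
    return winner, price

# Two toy demand-side platforms: one pays a premium whenever a
# health-related segment appears in the user's profile.
bidders = {
    "generic_dsp": lambda req: 0.10,
    "health_dsp": lambda req: 0.50 if any(
        s.startswith("health/") for s in req["user"]["segments"]) else 0.05,
}

winner, price = run_auction(bid_request, bidders)
print(winner, price)
```

The point of the sketch is that every bidder, winner or not, receives the full request, including the sensitive segments, which is exactly the broadcast the complainants object to.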
The IEEE Standards Association (IEEE-SA) released the first version of its ethics guidelines for autonomous and intelligent systems, titled “Ethically Aligned Design (EAD): A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems”, earlier this week. The EAD guidelines feature scientific analysis and resources, high-level principles, and actionable recommendations for the ethical implementation of autonomous and intelligent systems (A/IS). “We offer high-level General Principles in Ethically Aligned Design that we consider to be imperatives for creating and operating A/IS that further human values and ensure trustworthiness”, reads EAD. The guideline explains eight high-level ethical principles that can be applied to all types of A/IS, irrespective of whether they are physical robots, software systems, or algorithmic chatbots.

Eight General Principles in EAD

Human Rights

As mentioned in EAD, A/IS shall be created and operated in a way that respects, promotes, and protects internationally recognized human rights. These rights should be fully taken into consideration by individuals, companies, research institutions, and governments to reflect the principle that A/IS respect and fulfill human rights, freedoms, human dignity, and cultural diversity.

Well-being

EAD states that A/IS creators should adopt improved human well-being as a primary success criterion for development. EAD recommends that A/IS prioritize human well-being as an outcome in all system designs, using the best available and widely accepted well-being metrics as their reference point.

Data Agency

A/IS creators should put more emphasis on empowering individuals with the ability to access and securely share their data, and should focus on maintaining people’s capacity to have control over their identity.
Organizations and governments should test and implement technologies that allow individuals to specify their online agent for case-by-case authorization decisions. For minors, current guardianship approaches should be evaluated to determine their suitability in this context.

Effectiveness

Creators should provide evidence of the effectiveness and fitness for purpose of A/IS. EAD recommends that creators engaged in the development of A/IS focus on defining metrics that serve as valid and meaningful gauges of a system’s effectiveness. Creators of A/IS should design systems in which the metrics on specific deployments can be aggregated to deliver information on the effectiveness of the system across different deployments. Also, industry associations and other organizations (such as IEEE and ISO) should collaborate to develop standards for reporting on the effectiveness of A/IS.

Transparency

EAD states that the basis of a particular A/IS decision should always be discoverable. It recommends that new standards be developed that describe measurable and testable levels of transparency. These standards would offer designers a guide for self-assessing transparency during development and suggest mechanisms for improving it.

Accountability

As per EAD, A/IS should be created and operated so that they provide an “unambiguous rationale” for the decisions they make. EAD states that, to address issues of responsibility and accountability, courts should clarify the “responsibility, culpability, liability, and accountability” for A/IS prior to their development and deployment. It also states that designers and developers of A/IS should remain aware of the diversity of existing cultural norms among the users of these A/IS.

Awareness of Misuse

EAD states that creators should offer protection against all potential misuses and risks of A/IS in operation.
EAD recommends that creators be made aware of methods of misuse, and that A/IS be designed in ways that minimize the opportunity for such misuse. Public awareness should also be improved around the issues of potential A/IS technology misuse.

Competence

EAD states that creators should specify, and operators should adhere to, the knowledge and skill required for safe operation. It also mentions that the creators of A/IS should clearly specify the types and levels of knowledge required to understand and operate any given application of A/IS, and should provide the affected parties with information on the role of the operator and the implications of operator error. Rich and detailed documentation should be made accessible to both experts and the general public.

For more information, check out the official Ethically Aligned Design guidelines.