September 25, 2021

Op-Ed: You should have answered “Red” by Taylor Swift.

Are you on Facebook? Have you ever been asked to answer a question from one of your friends that goes something like this?

  • Without saying your age, name a TV show you watched as a kid?
  • Name an album that has no bad songs?
  • List a movie that you have watched at least ten times?
  • Name a song that has a color in the title?
  • Without saying your age, name a store you shopped at that no longer exists?

If you have answered any of those, congratulations! You have fed your information into the giant algorithms that Facebook (and other social networks) use to decide what you do and do not see in your feed.

Here is a simple explanation of how it all works:

Say you decide to answer the question “Name a song that has a color in the title.” If you answer with the 1967 Procol Harum hit “A Whiter Shade of Pale,” then you have unknowingly given a ton of information about yourself to the algorithm.

With that single answer, the algorithm can take a pretty good guess at your age, your musical preferences, probably your ethnicity, and even where you grew up, possibly down to the city. How does the algorithm know so much?

If you were listening to Procol Harum in 1967 (you remember that song well enough to use it as an answer to a random question), the program can take a pretty good guess that you were a teenager or in your twenties in 1967. Say you were 15 in 1967; the algorithm can then calculate your age to be around 69 years old, or thereabouts.

Because your answer points to a particular TYPE of music, the program can also take a pretty good guess at your ethnicity, based on several factors. The fact that you named a fairly obscure song by an all-white English rock group, instead of the equally obscure 1967 soul/jazz saxophone hit “Blue Nocturne” by King Curtis, probably hints at your ethnicity. That would be “white,” the program guesses.

Now, enjoying a TYPE of music does not mean you belong to a specific ethnicity; however, the algorithm can take a guess, especially if you have answered other, similar questions.

“A Whiter Shade of Pale” and “Blue Nocturne” were probably listened to by totally different audiences and were hits in totally different regions of the US (and around the world) when they were on the charts, and those regions have different demographic makeups. The algorithms play a probability game, piecing together all those little clues about where and when you were listening to Procol Harum, and where you weren’t listening to “Blue Nocturne.”

Because the algorithm now has a probable read on your age, your ethnicity, and your musical interests (you obviously are still listening to “A Whiter Shade of Pale” 54 years after it was released), along with the 10 or 20 other questions you have answered, each gathering its own specific data, it can now tailor your feed of information specifically to your tastes.
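
If you want to see the guessing game spelled out, here is a minimal, purely illustrative Python sketch of what a single quiz answer can yield. The tiny song table, the teenager-when-it-charted assumption, and the genre labels are all invented for the example; no social network publishes its actual model.

  from datetime import date

  # Hypothetical mini-catalog of songs: release year and genre.
  # A real system would draw on enormous catalogs plus listening data.
  SONGS = {
      "A Whiter Shade of Pale": {"year": 1967, "genre": "British rock"},
      "Blue Nocturne": {"year": 1967, "genre": "soul/jazz"},
      "Red": {"year": 2012, "genre": "pop/country"},
  }

  def profile_from_answer(song_title, assumed_age_at_release=15):
      """Turn one quiz answer into several guessed attributes.

      Mirrors the column's math: 15 years old in 1967 puts you
      around 69 in 2021 (older if the script runs later).
      """
      song = SONGS.get(song_title)
      if song is None:
          return {"note": "unknown song, fewer inferences possible"}
      years_since_release = date.today().year - song["year"]
      return {
          "estimated_age": assumed_age_at_release + years_since_release,
          "likely_genre_preference": song["genre"],
          "era_of_musical_memory": song["year"],
      }

  print(profile_from_answer("A Whiter Shade of Pale"))

Every additional question you answer produces another little profile like this one, and the guesses get combined and sharpened.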

Based on your name, the algorithm can probably guess your gender, and the IP address of your computer has already told it what city you live in. (The algorithm can also take a guess at where you grew up, based on data showing that most people live their lives within 200 miles of where they were born.)

You may have even scrolled through the list of songs others had entered, and you may have paused on one or two that jogged your memory. Those pauses, those mouse clicks, are all recorded. Ah, not only do you like “A Whiter Shade of Pale,” you also clicked on “Black Magic Woman” by Santana. 1970. Hmm. More info about your tastes and your age.

The point is that from what appears to be a single, innocent piece of information, Facebook and any other algorithm-loving social network can pretty much paint a picture of your life that wouldn’t be too far off from your reality, because each single entry provides multiple data points about you.

Yes, sometimes it is incorrect, but the news, the ads, and the order of the information presented to you are all now based on the information that you gladly give to the programs and servers that run these social networks.

Ever wonder why you never get ads for condoms, Chevy Corvettes, and Axe body spray, but instead get ads for erectile dysfunction medications, hair-loss shampoos, reverse mortgages, and battery-powered wheelchairs?

It is all because you answered “A Whiter Shade of Pale.” (You should have answered “Red” by Taylor Swift or “Bodak Yellow” by Cardi B. That would have confused them.)

It is no coincidence that these types of posts on Facebook and elsewhere arose at almost the same time that Apple allowed iPhone users to opt out of apps secretly collecting personal data about them. These networks still need to feed their insatiable appetite for data about you.

Facebook and others also make a substantial amount of profit by selling that collected data to other organizations.

I was once in a presentation by the president of an online education remediation/testing company who explained to the audience that for every single question a student answered online, his company could collect at LEAST 50 pieces of data about that student.

This data included the time it took to answer the question, whether the student skipped a question and returned to it, even whether the student hovered their mouse over a word and how long that “hover” lasted. By the end of a 35-question assessment, the company had collected close to 2,000 pieces of information about the student.
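
The company’s internal format is not public, but a rough Python sketch, with field names invented for illustration, gives a feel for a small slice of those 50-plus data points per question:

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class HoverEvent:
      word: str
      duration_ms: int  # how long the cursor lingered over the word

  @dataclass
  class QuestionTelemetry:
      # A handful of representative fields; the presenter claimed at
      # least 50 per question, so a real record would be far wider.
      question_id: str
      time_to_answer_ms: int
      was_skipped_then_revisited: bool
      answer_changes: int
      hovers: List[HoverEvent] = field(default_factory=list)

One record like this is produced for every question, and the whole batch is tied back to the individual student.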

He said that, over the years, they had collected data on over 5 million students (about twice the population of Mississippi) and that their algorithms could predict, with close to 99% accuracy, the exact pathway each student needed to be successful. He presented all this data collection as benign, with the data used to personalize follow-up learning.

For instance, if the program recorded that a student hovered their mouse over the word “Function,” then follow-up lessons would include the word “Function,” and follow-up assessments would make sure to include it as well.
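
Building on the telemetry sketch above, the follow-up logic could be as simple as flagging any word a student lingered on. The 1.5-second threshold here is an arbitrary assumption for the sketch, not the company’s actual rule:

  def vocabulary_to_reinforce(records, min_hover_ms=1500):
      """Collect words a student lingered on, as candidates for
      inclusion in follow-up lessons and assessments."""
      flagged = set()
      for record in records:
          for hover in record.hovers:
              if hover.duration_ms >= min_hover_ms:
                  flagged.add(hover.word)
      return flagged

  # A long hover over "Function" lands it in the set, so later lessons
  # and quizzes are selected (or generated) to include that word.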

Yes, that is how digital ed-tech companies can claim to “personalize” learning for students: they collect massive amounts of data about students and then use that data to selectively program instruction for each student.

Is that bad? Is it good? I think it is neither good nor bad.  If the data produces the desired student outcomes, then perhaps it is a good use of data.

What needs to happen, and what schools and parents need to make sure of, is that student data stays inside the program and is not used by or sold to outside sources, because students answering questions in an online environment are no different from their parents answering questions on Facebook about albums with no bad songs and song titles with colors in them.

Author: Tim Holt

Holt is an educator and writer with over 33 years’ experience in education. He opines on education-related topics here and on his own award-winning blog, HoltThink (http://holtthink.tumblr.com/). He values your feedback; feel free to leave a comment over at his site. Read his previous columns here.

***

The El Paso Herald-Post welcomes guest columns, open letters, letters to the Editor and analysis pieces for publication. To submit a piece or for questions regarding guidelines, please email us at news@epheraldpost.com
