Regulation - Too Much or Too Little?

The odds are excellent that you will leave this forum hating someone.
Farfromgeneva
Posts: 22647
Joined: Sat Feb 23, 2019 10:53 am

Re: Regulation - Too Much or Too Little?

Post by Farfromgeneva »

PizzaSnake wrote: Sat Feb 25, 2023 9:23 am
Farfromgeneva wrote: Fri Feb 24, 2023 3:42 pm Timnit Gebru Is Calling Attention to the Pitfalls of AI

The Silicon Valley veteran says big tech can’t be trusted to regulate artificial intelligence by itself.

Emily Bobrow, Feb. 24, 2023 12:27 pm ET
Ms. Gebru, 39, is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), a nonprofit she launched in 2021 with backing from the MacArthur Foundation, the Rockefeller Foundation, the Ford Foundation and others. Much of her work involves highlighting the ways AI programs can reinforce existing prejudices. “We talk about algorithms, but we don’t talk about who’s constructing the data set or who’s in the data set,” she says. Because machine-learning systems adopt patterns of language and images scraped from the internet, they are often riddled with the internet’s all-too-human flaws: “If the input data is biased, then the output can amplify such biases.”
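The amplification Ms. Gebru describes can be made concrete with a deliberately tiny sketch (toy data, not any real system): a maximum-likelihood "model" trained on a 70/30 skew will emit the majority label 100% of the time, so a moderate skew in the input becomes a total skew in the output.

```python
from collections import Counter

# Toy corpus with a 70/30 gender skew for one occupation (made-up data).
training = [("nurse", "female")] * 70 + [("nurse", "male")] * 30

counts = Counter(label for _occ, label in training)

def predict(occupation):
    """A maximum-likelihood 'model': always emit the majority label it saw."""
    return counts.most_common(1)[0][0]

input_skew = counts["female"] / len(training)                 # 0.70
predictions = [predict("nurse") for _ in range(100)]
output_skew = predictions.count("female") / len(predictions)  # 1.00
```

A 70% skew in the data becomes a 100% skew in the predictions: the bias is not merely preserved, it is amplified.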

For years, Ms. Gebru earned notoriety as an in-house AI skeptic at big tech companies. In 2018, while she was working at Microsoft, she co-authored a study that found that commercial facial-analysis programs were far more accurate in identifying the gender of white men than Black women, which the researchers warned could lead to damaging cases of false identification. Later, while working at Google, Ms. Gebru called on companies to be more transparent about the errors baked into their AI models.

Her work as an AI ethicist at Google came to an abrupt halt in late 2020 over research she planned to publish about the shortcomings of language-based AI programs. She says Google fired her; Google said she resigned, and a representative for the company declined to comment further. She now argues that tech companies can’t be trusted to regulate themselves.


Growing up, Gebru says, the drive to be the best ‘wasn’t a pressure in my family. It was the expectation.’

As a child in the Ethiopian capital of Addis Ababa, Ms. Gebru “was quite a nerd,” she recalls. Given her fascination with physics and math, she assumed she would become an electrical engineer, like her two older sisters and her father, who died when she was five. Whatever she decided to do, she felt driven to be the best. “It wasn’t a pressure in my family,” she says. “It was the expectation.”


When war broke out between Ethiopia and neighboring Eritrea in 1998, Ms. Gebru’s Eritrean-born family sought refuge overseas. After a detour in Ireland, she joined her mother, an economist, in Somerville, Mass. Ms. Gebru was already fluent in English and a strong student, so she was surprised when some of her new public school teachers tried to dissuade her from taking advanced courses. Although she recalls earning a top grade in her honors physics class, she says that her teacher suggested the AP class might be “too hard.”

Ms. Gebru went on to study electrical engineering at Stanford, where her experiments with an electronic piano helped secure an internship at Apple as an audio engineer. She cofounded a software startup—“I felt like I had to in Silicon Valley to get some respect”—before returning to Apple, where she helped develop signal processing algorithms for various products, including the first iPad. When she sensed she was more interested in algorithms than hardware, she returned to Stanford, where she earned her Ph.D. from the AI Laboratory in 2017.

As a grad student, Ms. Gebru was fascinated by the potential of AI. She worked in a lab that used a database of tens of millions of images to teach machines how to deduce a neighborhood’s demographics from its cars: Pickup trucks were popular in more Republican areas, vans correlated with more crime. By the end of her studies, however, Ms. Gebru worried about how these algorithms might be used.

In 2016 she was alarmed by a ProPublica article about how U.S. judges were increasingly relying in sentencing on an algorithm that predicted a criminal’s risk of reoffending. This software typically rated Black defendants who did not reoffend as “high risk” and white defendants who reoffended as “low risk.” “I got into this because I like building stuff,” she says. “But the more I started to do this work, the more I realized I needed to understand those kinds of harms.”
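The disparity ProPublica reported is, at bottom, a gap in false-positive rates between groups. A minimal sketch with illustrative counts (hypothetical figures, not the study's actual data):

```python
def false_positive_rate(fp, tn):
    """Fraction of people who did NOT reoffend but were flagged 'high risk'."""
    return fp / (fp + tn)

# Hypothetical slices of a confusion matrix, among non-reoffenders only:
# (wrongly flagged high-risk, correctly rated low-risk)
non_reoffenders = {"group_a": (45, 55), "group_b": (23, 77)}

rates = {g: false_positive_rate(fp, tn)
         for g, (fp, tn) in non_reoffenders.items()}
disparity = rates["group_a"] / rates["group_b"]  # roughly 2x the error burden
```

Even if the tool's overall accuracy were identical across groups, one group wrongly flagged about twice as often is exactly the kind of harm that aggregate accuracy numbers hide.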

[Photo caption: Gebru notes that in the rush to develop AI, tech companies aren't heeding calls to slow down. Photo: Nicholas Albrecht for The Wall Street Journal]
Although Ms. Gebru admits that her falling out with Google was stressful—“I lost 20 pounds in two weeks”—she says it clarified big tech’s approach to the ethics of AI. “There aren’t incentives for better behavior,” she says. By launching DAIR, Ms. Gebru says she hopes to offer a voice of restraint at a time when in-house critics may feel silenced: “I think it’s really difficult for people on the inside to push back.”

Ms. Gebru notes that in the rush to create a product that rivals OpenAI’s ChatGPT, which will soon power Microsoft’s Bing search engine, tech companies aren’t heeding calls to slow down. “Our recommendations basically say that before you put anything out, you have to understand what’s in your data set and document it thoroughly,” she says. “But at the end of the day this means taking more time, spending more resources and making less money. Who’s going to do that without legislation?” She hopes for laws that push tech companies to prove their products are safe, just as they do for car manufacturers and drug companies.

At DAIR, Ms. Gebru is working to call attention to some of the hidden costs of AI, from the computational power it requires to the low wages paid to laborers who filter training data. “In tech we’re very good at pretending that everything is in the cloud, but these models require a lot of people, energy and water,” she says. “The environmental costs can be huge.”

Yet Ms. Gebru wants to make sure that DAIR isn’t merely a naysaying organization. “I didn’t get into this because I wanted to fight people or big corporations,” she says. She points to how DAIR researchers are using thousands of high resolution satellite images to better understand the legacy of apartheid in South Africa, correlating township boundaries with disparities in public services.

“It’s demoralizing to just analyze harms and try to mitigate them,” she says. “We’re also trying to imaginatively think about how technology should be built.”


Or, simply put, shite in, shite out.

Bing’s “Sydney” ingested the Internet. Anyone surprised by the results?
Out in the Bay Area this weekend, getting a ride to my disasters from a business guy I know and spent some time with (clean-energy CRE stuff I've mentioned here or there). He's got a higher-end Tesla ("it's the cheaper of the high-end models," he tells me) after we did a hike starting across the street from his pimp-as-heck house in the hills of an area east of Oakland called Montclair - very modest, but a baller. We got to talking about it, and his issue is the quantification of values as inputs in AI: using the example of a car turning into you and a bike rider, the automation may calculate the economic harm at risk to be higher in not hitting the human being. My issue is the latency of the inputs to an algo in private hands.
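That worry can be made concrete. Here's a hypothetical sketch (made-up dollar figures, not any manufacturer's actual planner) of what happens when an automated choice minimizes estimated economic damage alone:

```python
# Made-up expected-damage estimates, in dollars, for two evasive options.
options = {
    "swerve_into_barrier": 40_000,      # totals the car; cyclist untouched
    "continue_toward_cyclist": 15_000,  # lower property-damage estimate
}

# A planner optimizing economic cost alone picks the cheaper option...
choice = min(options, key=options.get)  # -> "continue_toward_cyclist"

# ...which is exactly the unacceptable answer. The fix is an explicit,
# effectively non-negotiable weight on human harm, set openly rather than
# left as an implicit private-sector tuning decision.
HUMAN_HARM_PENALTY = 10**9
weighted = {k: v + (HUMAN_HARM_PENALTY if "cyclist" in k else 0)
            for k, v in options.items()}
safe_choice = min(weighted, key=weighted.get)  # -> "swerve_into_barrier"
```

The point isn't the specific numbers; it's that the objective function encodes a value judgment, and whoever sets those weights is making policy.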

At the epicenter of it right now, and it's interesting stuff - wildly divergent from the Matrix-style "they're taking humans over" scenario we've been conditioned to think of as tech going wrong.

What I appreciate most about the piece above is this quote:

“In tech we’re very good at pretending that everything is in the cloud, but these models require a lot of people, energy and water,” she says. “The environmental costs can be huge.”
Same sword they knight you they gon' good night you with
That's only half if they like you
That ain't even the half what they might do
Don't believe me, ask Michael
See Martin, Malcolm
See Jesus, Judas; Caesar, Brutus
See success is like suicide
PizzaSnake
Posts: 4847
Joined: Tue Mar 05, 2019 8:36 pm

Re: Regulation - Too Much or Too Little?

Post by PizzaSnake »

Farfromgeneva wrote: Sat Feb 25, 2023 10:13 am

That pesky trolley problem again. Expect to see more of that calculus.

A little eye-candy for you as well. Brady was a fool.

"There is nothing more difficult and more dangerous to carry through than initiating changes. One makes enemies of those who prospered under the old order, and only lukewarm support from those who would prosper under the new."
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

PizzaSnake wrote: Sat Feb 25, 2023 11:35 am
That pesky trolley problem again. Expect to see more of that calculus.

A little eye-candy for you as well. Brady was a fool.

Probably the only Will Smith film I liked. The Four Laws.
“You lucky I ain’t read wretched yet!”
Farfromgeneva
Posts: 22647
Joined: Sat Feb 23, 2019 10:53 am

Re: Regulation - Too Much or Too Little?

Post by Farfromgeneva »

Typical Lax Dad wrote: Sat Feb 25, 2023 12:59 pm
PizzaSnake wrote: Sat Feb 25, 2023 11:35 am
Farfromgeneva wrote: Sat Feb 25, 2023 10:13 am
PizzaSnake wrote: Sat Feb 25, 2023 9:23 am
Farfromgeneva wrote: Fri Feb 24, 2023 3:42 pm Timnit Gebru Is Calling Attention to the Pitfalls of AI

The Silicon Valley veteran says big tech can’t be trusted to regulate artificial intelligence by itself.

Emily BobrowFeb. 24, 2023 12:27 pm ET
Ms. Gebru, 39, is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), a nonprofit she launched in 2021 with backing from the MacArthur Foundation, the Rockefeller Foundation, the Ford Foundation and others. Much of her work involves highlighting the ways AI programs can reinforce existing prejudices. “We talk about algorithms, but we don’t talk about who’s constructing the data set or who’s in the data set,” she says. Because machine-learning systems adopt patterns of language and images scraped from the internet, they are often riddled with the internet’s all-too-human flaws: “If the input data is biased, then the output can amplify such biases.”

For years, Ms. Gebru earned notoriety as an in-house AI skeptic at big tech companies. In 2018, while she was working at Microsoft, she co-authored a study that found that commercial facial-analysis programs were far more accurate in identifying the gender of white men than Black women, which the researchers warned could lead to damaging cases of false identification. Later, while working at Google, Ms. Gebru called on companies to be more transparent about the errors baked into their AI models.

Her work as an AI ethicist at Google came to an abrupt halt in late 2020 over research she planned to publish about the shortcomings of language-based AI programs. She says Google fired her; Google said she resigned, and a representative for the company declined to comment further. She now argues that tech companies can’t be trusted to regulate themselves.

Advertisement - Scroll to Continue

Growing up, Gebru says, the drive to be the best ‘wasn’t a pressure in my family. It was the expectation.’

As a child in the Ethiopian capital of Addis Ababa, Ms. Gebru “was quite a nerd,” she recalls. Given her fascination with physics and math, she assumed she would become an electrical engineer, like her two older sisters and her father, who died when she was five. Whatever she decided to do, she felt driven to be the best. “It wasn’t a pressure in my family,” she says. “It was the expectation.”

Newsletter Sign-up

Grapevine

A weekly look at our most colorful, thought-provoking and original feature stories on the business of life.

When war broke out between Ethiopia and neighboring Eritrea in 1998, Ms. Gebru’s Eritrean-born family sought refuge overseas. After a detour in Ireland, she joined her mother, an economist, in Somerville, Mass. Ms. Gebru was already fluent in English and a strong student, so she was surprised when some of her new public school teachers tried to dissuade her from taking advanced courses. Although she recalls earning a top grade in her honors physics class, she says that her teacher suggested the AP class might be “too hard.”

More Weekend Confidential

Oksana Masters Had a Difficult Path to Athletic TriumphFebruary 17, 2023
Richard Parsons Is Investing in People Who Are OverlookedFebruary 10, 2023
Ryan Gellert Wants Patagonia to Be Part of the Environmental SolutionFebruary 3, 2023
Grace Young Wants to Keep Chinatown Restaurants in BusinessJanuary 27, 2023
Ms. Gebru went on to study electrical engineering at Stanford, where her experiments with an electronic piano helped secure an internship at Apple as an audio engineer. She cofounded a software startup—“I felt like I had to in Silicon Valley to get some respect”—before returning to Apple, where she helped develop signal processing algorithms for various products, including the first iPad. When she sensed she was more interested in algorithms than hardware, she returned to Stanford, where she earned her Ph.D. from the AI Laboratory in 2017.

As a grad student, Ms. Gebru was fascinated by the potential of AI. She worked in a lab that used a database of tens of millions of images to teach machines how to deduce a neighborhood’s demographics from its cars: Pickup trucks were popular in more Republican areas, vans correlated with more crime. By the end of her studies, however, Ms. Gebru worried about how these algorithms might be used.

In 2016 she was alarmed by a ProPublica article about how U.S. judges were increasingly relying in sentencing on an algorithm that predicted a criminal’s risk of reoffending. This software typically rated Black defendants who did not reoffend as “high risk” and white defendants who reoffended as “low risk.” “I got into this because I like building stuff,” she says. “But the more I started to do this work, the more I realized I needed to understand those kinds of harms.”

Gebru notes that in the rush to develop AI, tech companies aren’t heeding calls to slow down. Photo: Nicholas Albrecht for The Wall Street Journal
Although Ms. Gebru admits that her falling out with Google was stressful—“I lost 20 pounds in two weeks”—she says it clarified big tech’s approach to the ethics of AI. “There aren’t incentives for better behavior,” she says. By launching DAIR, Ms. Gebru says she hopes to offer a voice of restraint at a time when in-house critics may feel silenced: “I think it’s really difficult for people on the inside to push back.”

Ms. Gebru notes that in the rush to create a product that rivals OpenAI’s ChatGPT, which will soon power Microsoft’s Bing search engine, tech companies aren’t heeding calls to slow down. “Our recommendations basically say that before you put anything out, you have to understand what’s in your data set and document it thoroughly,” she says. “But at the end of the day this means taking more time, spending more resources and making less money. Who’s going to do that without legislation?” She hopes for laws that push tech companies to prove their products are safe, just as they do for car manufacturers and drug companies.


At DAIR, Ms. Gebru is working to call attention to some of the hidden costs of AI, from the computational power it requires to the low wages paid to laborers who filter training data. “In tech we’re very good at pretending that everything is in the cloud, but these models require a lot of people, energy and water,” she says. “The environmental costs can be huge.”

Yet Ms. Gebru wants to make sure that DAIR isn’t merely a naysaying organization. “I didn’t get into this because I wanted to fight people or big corporations,” she says. She points to how DAIR researchers are using thousands of high-resolution satellite images to better understand the legacy of apartheid in South Africa, correlating township boundaries with disparities in public services.

“It’s demoralizing to just analyze harms and try to mitigate them,” she says. “We’re also trying to imaginatively think about how technology should be built.”

“Our recommendations basically say that before you put anything out, you have to understand what’s in your data set and document it thoroughly.”

Or, simply put, shite in, shite out.

Bing’s “Sydney” ingested the Internet. Anyone surprised by the results?
Was out in the Bay Area this weekend and was getting a ride to my disasters from a business guy I know and spent some time with (clean energy CRE stuff I’ve mentioned here or there). He’s got a higher-end Tesla (“it’s the cheaper of the high-end models,” he tells me) after we did a hike starting across the street from his pimp-as-heck house in the hills of an area east of Oakland called Montclair - very modest but a baller. We’re talking about it, and his issue is the quantification of values as inputs in AI, using the example of a car turning into you and a bike rider, where the automation may evaluate and calculate the economic harm at risk to be higher in not hitting the human being. My issue is the time latency of the inputs to an algo in private hands.

At the epicenter of it right now, and it’s interesting stuff - wildly divergent from the Matrix-style “they’re taking the humans over” scenario we’ve been conditioned to think of as tech going wrong.

My appreciation for the piece above is this quote:

That pesky trolley problem again. Expect to see more of that calculus.

A little eye-candy for you as well. Brady was a fool.

Probably the only Will Smith film I liked. The Four Laws.
6 degrees of separation? (Play better than film but solid)
Same sword they knight you they gon' good night you with
Thats' only half if they like you
That ain't even the half what they might do
Don't believe me, ask Michael
See Martin, Malcolm
See Jesus, Judas; Caesar, Brutus
See success is like suicide
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

Farfromgeneva wrote: Sat Feb 25, 2023 2:11 pm
Probably the only Will Smith film I liked. The Four Laws.
6 degrees of separation? (Play better than film but solid)
First film. His “Hollywood Indoctrination”…. Captured on film.
“You lucky I ain’t read wretched yet!”
PizzaSnake
Posts: 4847
Joined: Tue Mar 05, 2019 8:36 pm

Re: Regulation - Too Much or Too Little?

Post by PizzaSnake »

Good news.

https://www.theguardian.com/us-news/202 ... ys-average

“But such accidents are happening with striking regularity. A Guardian analysis of data collected by the Environmental Protection Agency (EPA) and by non-profit groups that track chemical accidents in the US shows that accidental releases – be they through train derailments, truck crashes, pipeline ruptures or industrial plant leaks and spills – are happening consistently across the country.

By one estimate these incidents are occurring, on average, every two days.”
"There is nothing more difficult and more dangerous to carry through than initiating changes. One makes enemies of those who prospered under the old order, and only lukewarm support from those who would prosper under the new."
PizzaSnake
Posts: 4847
Joined: Tue Mar 05, 2019 8:36 pm

Re: Regulation - Too Much or Too Little?

Post by PizzaSnake »

HooDat wrote: Fri Feb 24, 2023 12:43 pm I am REALLY enjoying the discussion:




Typical Lax Dad wrote: Thu Feb 23, 2023 8:18 pm Sarcasm. I am not sure how the fancy degree attracts kids that are not as competent and the more competent pursue less fancy degrees….Not sure how that works. How are these pools distinguished? Hoping Hoodatis can explain how that works.
see the quote from Pizzasnake below, add in a dash of my response to you below about fancy people, stir it all up in a hubris sauce and there you have it...
Farfromgeneva wrote: Thu Feb 23, 2023 8:33 pm So there is one reason an occasional kid might “trade down”.
there are so many reasons a kid may trade DOWN (?!?!?). That right there is where the fancy comes in....
Typical Lax Dad wrote: Thu Feb 23, 2023 8:40 pmAnyway….I am not sure if Hoodatis means fancy degrees or fancy schools.
I mean FANCY people, who think their degrees from FANCY schools make them more important than regular people....
Farfromgeneva wrote: Thu Feb 23, 2023 9:20 pm Don’t forget I’m a rent seeking POS banker with a MBA as well. Not fancy but then again I really really hate that word. Something about it grates at my nerves even if I can’t explain why.
I used the word fancy for all the reasons you imply here.... By the way, I am also a POS banker with an MBA, staring at my wall sized degree from a school that likes to dabble in fancy-ness..... (I made up another word!)
Typical Lax Dad wrote: Thu Feb 23, 2023 9:22 pm Also could be fancy people.
ding, ding, ding
PizzaSnake wrote: Fri Feb 24, 2023 11:21 am "Credentialing" for subject matter expertise is just one part of the process. Establish a standard, in this case subject matter mastery, then measure adherence, then enforce. In the cases cited above, what happened when the deficiency was determined (measured)? Was meaningful, corrective action taken that would improve the state of those whose well-being was served by the standard? Certain professions, medical doctors, appear to be the worst in the "corrective" phase of the process due to either "tribal" loyalty to members of the "club" or, most probably, fear of litigation. Even informal mechanisms for quality control have been debased by the rise of the Internet and false review production.

As I like to say, "build a better mousetrap and I'll build you a better fool." Humans as a group are very, very clever. Any sort of system designed by other humans to modify or correct anti-social behavior will quickly be circumvented. Ever wonder why legislatures never finish their work? Vive la innovation...
and the basis of me starting this conversation is that I see little evidence of subject matter mastery. Instead I see evidence of system mastery - experts at playing the game to get the fancy degree from the fancy school and get a job with the fancy people. (ffg - are your teeth on edge with all those fancies in there?)
Not many want to do the “real work.”

“We all know the real work in whatever field it is we’ve mastered. It’s shorthand, one might say, for the difference between accomplishment and mere achievement. Yet the real work doesn’t seem to be a goal of the way we live, which favours, over the real work, what we might call the rote work. We live in an achievement-driven society in which kids of all kinds and classes are perpetually being pushed toward the next evanescent achievement instead of the next enduring accomplishment. Yet anyone who is a parent of any sensitivity at all recognises that what really stirs and moves children isn’t the “A” you get in the test. No, what really moves and stirs us is accomplishment, that moment of mastery when suddenly we feel that something profoundly difficult, tenaciously thorny, has given way and we are now the Master of It, instead of us being mastered by it. That feeling may not be the very best feeling in life – there are a few competitive others – but it is, I’ve come to believe, the most sustaining feeling. I know how to do this and this is the thing I know how to do.”

https://www.theguardian.com/lifeandstyl ... raw-skills
"There is nothing more difficult and more dangerous to carry through than initiating changes. One makes enemies of those who prospered under the old order, and only lukewarm support from those who would prosper under the new."
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

“I mean FANCY people, who think their degrees from FANCY schools make them more important than regular people....” Hoodatis….

This is a laughable statement….wrapped up in the idea of being circumspect….. You think people from non-fancy schools, with non-fancy degrees, don’t believe they are more important, or even worse, “better” than some regular people? …like I said, I don’t believe in stereotypes. People are people. Some are good and some are bad. Maybe we only know what we see? That adverse selection thing again.
“You lucky I ain’t read wretched yet!”
Farfromgeneva
Posts: 22647
Joined: Sat Feb 23, 2019 10:53 am

Re: Regulation - Too Much or Too Little?

Post by Farfromgeneva »

Typical Lax Dad wrote: Sun Feb 26, 2023 12:52 pm “I mean FANCY people, who think their degrees from FANCY schools make them more important than regular people....” Hoodatis….

This is a laughable statement….wrapped up in the idea of being circumspect….. You think people from non fancy schools, with non fancy degrees don’t believe they are more import, or even worse, “better” than some regular people? …like I said, I don’t believe in stereotypes. People are people. Some are good and some are bad. Maybe we only know what we see? That adverse selection thing again.
https://m.youtube.com/watch?v=kj5EZ0Aggtk
Same sword they knight you they gon' good night you with
Thats' only half if they like you
That ain't even the half what they might do
Don't believe me, ask Michael
See Martin, Malcolm
See Jesus, Judas; Caesar, Brutus
See success is like suicide
User avatar
HooDat
Posts: 2373
Joined: Mon Jul 30, 2018 12:26 pm

Re: Regulation - Too Much or Too Little?

Post by HooDat »

Typical Lax Dad wrote: Sun Feb 26, 2023 12:52 pm “I mean FANCY people, who think their degrees from FANCY schools make them more important than regular people....” Hoodatis….

This is a laughable statement….wrapped up in the idea of being circumspect….. You think people from non fancy schools, with non fancy degrees don’t believe they are more import, or even worse, “better” than some regular people? …like I said, I don’t believe in stereotypes. People are people. Some are good and some are bad. Maybe we only know what we see? That adverse selection thing again.
You are free to believe what you want, esp if it makes you feel better about yourself, but stereotypes exist for a reason. And of course stereotypes are not universal (nothing is). But I am not being the least bit circumspect - I stand behind the fact that a lot more fancy people from fancy schools think they are better than everyone else (than equivalent non-fancy people from non-fancy schools) - because they have been told that over and over and over again. Why wouldn't they believe it?
STILL somewhere back in the day....

...and waiting/hoping for a tinfoil hat emoji......
User avatar
HooDat
Posts: 2373
Joined: Mon Jul 30, 2018 12:26 pm

Re: Regulation - Too Much or Too Little?

Post by HooDat »

PizzaSnake wrote: Sun Feb 26, 2023 12:02 pm We live in an achievement-driven society in which kids of all kinds and classes are perpetually being pushed toward the next evanescent achievement instead of the next enduring accomplishment.
I feel seen :D
STILL somewhere back in the day....

...and waiting/hoping for a tinfoil hat emoji......
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

HooDat wrote: Mon Feb 27, 2023 10:53 am
You are free to believe what you want, esp if it makes you feel better about yourself, but stereotypes exist for a reason. And of course stereotypes are not universal (nothing is). But, I am not being the lest bit circumspect - I stand behind the fact that a lot more fancy people from fancy schools think they are better that everyone else (than equivilent non-fancy people from non-fancy schools) - because they have been told that over and over and over again. Why wouldn't they believe it?
The reason isn’t because they are always true. How do you know what an electrician’s kid that goes to a fancy school has been told over and over again? You believe in stereotypes. It’s a free country.
“You lucky I ain’t read wretched yet!”
User avatar
HooDat
Posts: 2373
Joined: Mon Jul 30, 2018 12:26 pm

Re: Regulation - Too Much or Too Little?

Post by HooDat »

Typical Lax Dad wrote: Mon Feb 27, 2023 10:57 am
The reason isn’t because they are always true. How do you know what an electrician’s kid that goes to a fancy school has been told over and over again? You believe in stereotypes. It’s a free country.
You need to learn to hold more than one idea in your head at a time and not try to force everything into a universal one dimensional model. NOTHING is EVER always true.
STILL somewhere back in the day....

...and waiting/hoping for a tinfoil hat emoji......
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

HooDat wrote: Mon Feb 27, 2023 10:59 am
You need to learn to hold more than one idea in your head at a time and not try to force everything into a universal one dimensional model. NOTHING is EVER always true.
Un huh. You are the guy that made a generalization….not me. Your opinion can be just as wrong as mine. You should stop hiring kids from fancy schools with fancy degrees and definitely should not encourage anyone to go to a fancy school and pursue a fancy degree, even if it’s the student’s choice.

Why are there stereotypes and which are true and which are false?

EDIT: Good win Saturday night
“You lucky I ain’t read wretched yet!”
User avatar
HooDat
Posts: 2373
Joined: Mon Jul 30, 2018 12:26 pm

Re: Regulation - Too Much or Too Little?

Post by HooDat »

Typical Lax Dad wrote: Mon Feb 27, 2023 11:02 am
Un huh. You are the guy that made a generalization….not me. Your opinion can be just as wrong as mine. You should stop hiring kids from fancy schools with fancy degrees and definitely should not encourage anyone to go to a fancy school and pursue a fancy degree, even if it’s the student’s choice.

Why are there stereotypes and which are true and which are false?

EDIT: Good win Saturday night
TLD - you seem to have skipped over an important part of my post - highlighted above.

The best place for each unique individual to go to school really depends on the type of life they want to build for themselves when they graduate.

re: EDIT - UVA looked pretty good against a strong Ohio State team. I was particularly pleased to see the defense play well.
STILL somewhere back in the day....

...and waiting/hoping for a tinfoil hat emoji......
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

HooDat wrote: Mon Feb 27, 2023 11:13 am
TLD - you seem to have skipped over an important part of my post - highlighted above.

The best place for each unique individual to go to school really depends on the type of life they want to build for themselves when they graduate.

re: EDIT - UVA looked pretty good against a strong Ohio State team. I was particularly pleased to see the defense play well.
I saw what you wrote. See my bold….when is it appropriate to use a stereotype for a person and when isn’t it appropriate? I don’t avoid stereotypes to make myself feel better. I was raised to take people as they come and have learned enough to realize stereotypes are not a valid way to categorize people.
“You lucky I ain’t read wretched yet!”
HooDat
Posts: 2373
Joined: Mon Jul 30, 2018 12:26 pm

Re: Regulation - Too Much or Too Little?

Post by HooDat »

Typical Lax Dad wrote: Mon Feb 27, 2023 11:19 am I saw what you wrote. See my bold….when is it appropriate to use a stereotype for a person and when isn’t it appropriate? I don’t avoid stereotypes to make myself feel better. I was raised to take people as they come and have learned enough to realize stereotypes are not a valid way to categorize people.
Psychologically speaking, our brains are literally wired to stereotype. We would go insane if we didn't, because our brains are "throw everything we see into a category as fast as I can" machines. It is how we limit wasting our attention on the wrong things so we can focus on the things we need to in order to survive.

If you can't admit that your brain is trying to stereotype people and understand how, then you are not going to be able to consciously resist any biases that may come as a result of those unacknowledged stereotypes.

Of course we should take every individual as the unique being that they are. But that individual is informed by their background - and how they leaned into or away from the stereotypes that defined their circumstances are probably some of the most important (and interesting) parts of what makes them unique.
STILL somewhere back in the day....

...and waiting/hoping for a tinfoil hat emoji......
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

HooDat wrote: Mon Feb 27, 2023 11:33 am
Typical Lax Dad wrote: Mon Feb 27, 2023 11:19 am I saw what you wrote. See my bold….when is it appropriate to use a stereotype for a person and when isn’t it appropriate? I don’t avoid stereotypes to make myself feel better. I was raised to take people as they come and have learned enough to realize stereotypes are not a valid way to categorize people.
Psychologically speaking, our brains are literally wired to stereotype. We would go insane if we didn't, because our brains are "throw everything we see into a category as fast as I can" machines. It is how we limit wasting our attention on the wrong things so we can focus on the things we need to in order to survive.

If you can't admit that your brain is trying to stereotype people and understand how, then you are not going to be able to consciously resist any biases that may come as a result of those unacknowledged stereotypes.

Of course we should take every individual as the unique being that they are. But that individual is informed by their background - and how they leaned into or away from the stereotypes that defined their circumstances are probably some of the most important (and interesting) parts of what makes them unique.
Being aware of it is precisely why I try to avoid it….consciously….you seem to rely on it….. many biases come from the media. Being a student of mass media communication in my formative years and having to research it is why I avoid using stereotypes….you never answered my questions. When is it appropriate to use stereotypes and when isn’t it, and which stereotypes are true and which aren’t? You can feel free to believe that it’s fine to stereotype folk. I wasn’t raised that way and wasn’t taught that way.

“The researchers say stereotypes appear to form and evolve because people share similar cognitive limitations and biases.

People are more likely to confuse the identity of individuals when they belong to the same social category than when they belong to different categories. Similarly, people are more likely to mistakenly think that individuals who belong to the same social category also share the same attributes. Because we all experience the same category-based memory biases, when social information is repeatedly shared it is continually filtered as it passes from one mind to the next until eventually it becomes organised categorically and a stereotype has formed.

The scientists say their research appears to explain why some stereotypes have a basis in reality while others have no obvious origin.

“For example, the cultural stereotype of Scottish people includes attributes that are overrepresented among Scots, such as wearing kilts and having red hair, but also attributes that seemingly have no basis in reality, such as being miserly or dour,” says Martin.”

One of my good friends is from Glasgow. He is just the opposite of this…..
“You lucky I ain’t read wretched yet!”
HooDat
Posts: 2373
Joined: Mon Jul 30, 2018 12:26 pm

Re: Regulation - Too Much or Too Little?

Post by HooDat »

Typical Lax Dad wrote: Mon Feb 27, 2023 11:39 am Being aware of it is precisely why I try to avoid it….consciously….you seem to rely on it
I don't see where you get that I rely on it.

But I will admit I am fascinated by how people deviate from and conform to stereotypes.
STILL somewhere back in the day....

...and waiting/hoping for a tinfoil hat emoji......
Typical Lax Dad
Posts: 32339
Joined: Mon Jul 30, 2018 12:10 pm

Re: Regulation - Too Much or Too Little?

Post by Typical Lax Dad »

HooDat wrote: Mon Feb 27, 2023 11:43 am
Typical Lax Dad wrote: Mon Feb 27, 2023 11:39 am Being aware of it is precisely why I try to avoid it….consciously….you seem to rely on it
I don't see where you get that I rely on it.

But I will admit I am fascinated by how people deviate from and conform to stereotypes.
Fancy schools, fancy degrees…kids told all their lives they are better….they believe they are better etc. Anyway, hope a kid from a fancy school with a fancy degree doesn’t have to interview with you for a job. Your decision could change the direction of their life. Have a good Monday.
“You lucky I ain’t read wretched yet!”