
Cognitive Biases: What They Are, Why They're Important
🎬 互動字幕 (171段)
00:00
KEVIN DELAPLANTE: This is The Critical Thinker, Episode 12.
凱文·德拉普蘭特:這是《批判性思考者》第十二集。
00:03
[MUSIC PLAYING] Hi everyone.
[音樂播放中] 大家好。
00:13
I'm Kevin, and this is The Critical Thinker podcast.
我是凱文,這是《批判性思考者》播客。
00:16
I teach philosophy for a living, and in my spare time I produce this podcast and videos for the tutorial site criticalthinkeracademy.com.
我以教授哲學為業,業餘時間則製作這個播客以及 criticalthinkeracademy.com 上的教學影片。
00:24
You might have noticed a name change since the last podcast.
你可能注意到自從上次播客後,網站名稱有所變更。
00:26
The old site was called criticalthinkingtutorials.com, which hosted both the podcast and the tutorial videos.
舊網站叫做 criticalthinkingtutorials.com,同時託管播客和教學影片。
00:34
But that site is no more.
但該網站已不復存在。
00:35
There have been some changes, which I'll elaborate on in a future episode.
發生了一些變動,我將在未來的集數中詳細說明。
00:39
But these podcast episodes are now hosted on their own site, www.criticalthinkerpodcast.com.
現在這些播客集數託管在它們自己的網站 www.criticalthinkerpodcast.com 上。
00:45
That's where you can go to view the show notes and leave comments on the podcast episodes.
你可以在那裡查看節目筆記並對播客集數留言。
00:50
You can find the tutorial videos at www.criticalthinkeracademy.com.
你可以在 www.criticalthinkeracademy.com 找到教學影片。
00:55
These are sister sites.
這兩個網站是姊妹站。
00:56
They link to each other, so you can move back and forth between them pretty easily.
它們互相連結,因此你可以輕鬆地在兩者之間切換。
01:00
Anyway, this episode and the next episode are about cognitive biases and their importance for critical thinking.
總之,這一集和下一集將探討認知偏誤及其對批判性思考的重要性。
01:06
In this episode, I'm going to give a top-level overview of what cognitive biases are and why they're important.
在這一集,我將對認知偏誤是什麼以及為何重要做一個高層次的概述。
01:12
And next episode, I'm going to give a bunch of interesting examples of cognitive biases in action.
下一集,我將舉出許多認知偏誤在實際作用中的有趣例子。
01:18
OK.
好的。
01:19
Everyone agrees that logic and argumentation are important for critical thinking.
大家都同意邏輯和論證對批判性思考很重要。
01:22
One of the things that I've tried to emphasize on the podcast is that background knowledge is also very important.
我在播客中一直試圖強調的一點是,背景知識也非常重要。
01:27
There are different types of background knowledge that are relevant to critical thinking in different ways.
有不同類型的背景知識,以不同方式與批判性思考相關。
01:33
One of the most important types of background knowledge is knowledge of how our minds actually work-- how human beings actually think and reason,
最重要的背景知識類型之一,就是了解我們的思維如何運作——人類實際如何思考與推理,
01:41
how we actually form beliefs, how we actually make decisions.
我們實際如何形成信念,我們實際如何做決策。
01:45
There are a lot of different scientific fields that study how our minds actually work.
有許多不同的科學領域在研究我們的思維如何運作。
01:50
There's behavioral psychology, social psychology, cognitive psychology, cognitive neuroscience-- a bunch of fields.
包括行為心理學、社會心理學、認知心理學、認知神經科學等一系列領域。
01:56
Over the past 30 years, we've learned an awful lot about human reasoning and decision-making.
過去30年來,我們對人類的推理與決策已有了非常多的認識。
02:01
Now, a lot of this research was stimulated by the work of two important researchers, Daniel Kahneman and Amos Tversky, going back to the early 1970s.
這項研究很大程度上受到兩位重要學者——丹尼爾·卡尼曼(Daniel Kahneman)與阿摩司·特沃斯基(Amos Tversky)——自1970年代初期起的研究成果所啟發。
02:11
They laid the foundations for what is now called the biases and heuristics tradition in psychology.
他們為現今心理學中稱為「偏誤與啟發法」的傳統奠定了基礎。
02:17
Now to get a feel for the importance of this research, let's back up a bit.
要了解這項研究的重要性,我們先退一步思考。
02:20
When studying human reasoning, you can ask two sorts of questions.
在研究人類推理時,可以提出兩類問題。
02:23
One is a purely descriptive question.
一類是純描述性的問題。
02:26
How do human beings, in fact, reason?
人類實際上是如何推理的?
02:29
The other is a prescriptive or normative question.
另一類是規範性或應然性的問題。
02:32
How should human beings reason?
人類應該如何推理?
02:34
What's the difference between good reasoning and bad reasoning?
好的推理與壞的推理有何不同?
02:37
Now, when we study logic and argumentation, we're learning a set of rules and concepts that attempt to answer this second question, how should we
當我們學習邏輯與論證時,我們正在學習一套規則與概念,試圖回答這第二個問題:我們應該如何
02:44
reason?
推理?
02:45
These are elements of a broader normative theory of rationality, what it means to reason well.
這些是更廣泛的理性規範理論的要素,探討何謂良好的推理。
02:50
Actually, what we have is not one big theory of rationality, but a bunch of more narrowly focused theories that target reasoning in specific domains
實際上,我們擁有的並非一套單一的理性理論,而是許多更狹窄聚焦的理論,針對特定領域的推理
02:58
or under specific conditions.
或特定條件下的推理。
03:00
So for example, when we're interested in truth-preserving inferences, we appeal to deductive logic and interpret
例如,當我們關注的是保真推理(truth-preserving inferences)時,我們會訴諸演繹邏輯,並將
03:06
the rules of deductive logic as norms for reasoning in this area.
演繹邏輯的規則詮釋為此領域的推理規範。
03:11
When we're interested in reasoning about chance and uncertainty, we appeal to probability theory and statistics to give us norms for correct reasoning
當我們有興趣對機會與不確定性進行推理時,我們會訴諸機率論與統計學,為正確推理提供規範
03:18
in this area.
在此領域。
03:19
If we're talking about making choices that will maximize certain goals under conditions of uncertainty, then we appeal to formal decision theory.
如果我們談論的是在不確定條件下做出能最大化特定目標的選擇,那麼我們會訴諸形式決策理論。
03:26
If we add the situation where other actors are making choices that can affect the outcomes of our decisions, then we're moving into what's called game theory.
如果我們再加上其他行動者的選擇也會影響我們決策結果的情況,那麼我們就進入了所謂的賽局理論。
03:35
So over time, we've developed a number of different theories of rationality that give us norms for correct reasoning in different domains.
因此,隨著時間推移,我們發展出了許多不同的理性理論,為不同領域的正確推理提供規範。
03:42
Now, this is great, of course.
當然,這非常棒。
03:43
These are very powerful and useful tools.
這些是非常強大且實用的工具。
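To make one of these normative theories concrete, here is a minimal sketch of the decision-theoretic norm mentioned above: under uncertainty, choose the option with the highest expected value. The options, probabilities, and payoffs below are invented for illustration; they are not from the episode.

```python
# A minimal sketch of expected-value maximization, the norm supplied by
# formal decision theory. All options and numbers here are invented.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

options = {
    "safe":   [(1.0, 40)],             # a sure gain of 40
    "gamble": [(0.5, 0), (0.5, 100)],  # a 50/50 gamble between 0 and 100
}

# The normatively "rational" choice is the one with the highest expected value.
best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # → gamble (expected value 50 beats the sure 40)
```

Game theory extends this picture by letting the payoffs depend on what other agents choose.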
03:46
Now, when it comes to the study of how human reasoning actually works, before Kahneman and Tversky's work in the 1970s, there was a widely shared view that more often than not,
現在,談到人類推理實際如何運作的研究,在卡尼曼與特沃斯基1970年代的研究之前,有一種普遍的觀點認為,多數情況下,
03:57
the mind, or the brain, processes information in ways that mimic the formal models of reasoning and decision-making that were familiar from our normative
心智或大腦處理資訊的方式,會模仿我們熟悉的規範性
04:06
models of reasoning-- from formal logic, from probability theory, from decision theory.
推理模型中的形式推理與決策模型——來自形式邏輯、機率論與決策理論。
04:11
What Kahneman and Tversky showed is that more often than not, this is not the way our minds work.
卡尼曼與特沃斯基所證明的是:多數情況下,我們的思維並非如此運作。
04:17
They showed that there's a gap between how our normative theories say we should reason and how we in fact reason.
他們顯示出,規範理論所說我們應有的推理方式,與我們實際的推理方式之間存在差距。
04:24
This gap can manifest itself in different ways, and there's no one single explanation for it.
這種差距可能以不同方式顯現,而且沒有單一的解釋。
04:29
One reason, for example, is that in real world situations, the reasoning processes prescribed by our normative theories of rationality
舉例來說,原因之一是:在現實世界的情境中,規範性理性理論所規定的推理
04:37
can be computationally very intensive.
過程在計算上可能非常繁重。
04:40
Our brains would need to process an awful lot of information to implement our best normative theories of reasoning.
我們的大腦需要處理大量資訊,才能實行我們最好的規範性推理理論。
04:45
But this kind of information processing takes time.
但這種資訊處理需要時間。
04:47
And in the real world, we often need to make decisions much quicker, sometimes in milliseconds.
而在現實世界中,我們往往需要更快做出決定,有時僅在幾毫秒之內。
04:52
You can imagine this time pressure being even more intense if you think about the situations facing our Homo sapien ancestors.
你可以想像,若考慮到我們智人祖先所面臨的情境,這種時間壓力會更加強烈。
04:59
If there's a big animal charging you, and you wait too long to figure out what to do, you're dead.
如果有一隻大型動物朝你衝來,而你猶豫太久、不知如何是好,你就死定了。
05:03
So the speculation is that our brains have evolved various shortcut mechanisms for making decisions, especially when the problems we're facing are complex,
所以有推測認為,我們的大腦已經演化出各種捷徑機制來做決策,特別是在我們面臨的問題很複雜、
05:13
we have incomplete information, and there's risk involved.
資訊不完整、且涉及風險的情況下。
05:16
In these situations, we sample the information available to us.
在這些情況下,我們會從可用的資訊中進行抽樣。
05:19
We focus on just those bits that are most relevant to our decision task, and we make a decision based on a rule of thumb
我們只專注於與決策任務最相關的片段,並根據經驗法則做出決定,
05:26
or a shortcut that just does the job.
或是採用能解決當下問題的捷徑。
05:29
These rules of thumb are the heuristics in the so-called biases and heuristics literature.
這些經驗法則,就是所謂「偏誤與捷徑」(biases and heuristics)文獻中的捷徑(heuristics)。
05:35
Two important things to note here.
這裡有兩點重要事項需要注意。
05:37
One is that we're usually not consciously aware of the heuristics that we're using, or the information that we're focusing on.
第一,我們通常不會意識到自己正在使用這些捷徑,也不會意識到我們正在關注哪些資訊。
05:43
Most of this is going on below the surface.
大部分的運作都在潛意識中進行。
05:46
The second thing to note is that these heuristics aren't designed to give us the best solutions to our decision problems, all things considered.
第二點要注意的是,這些捷徑的設計目的,並不是要在通盤考量下,為我們的決策問題提供最佳解法。
05:53
What they're designed to do is give us solutions that are good enough for immediate purposes.
它們的設計目的是提供「足夠好」、能應付眼前需求的解決方案。
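The trade-off described here — an exhaustive normative computation versus a shortcut that is merely "good enough" — can be sketched with a generic satisficing rule. The rule and the numbers below are invented for illustration; the episode does not specify any particular heuristic at this point.

```python
# Sketch of the trade-off described above. The exhaustive rule scores every
# option before choosing; the heuristic stops at the first option that clears
# a "good enough" bar. Options and scores are invented for illustration.

def choose_exhaustive(options, score):
    # Normatively optimal, but must evaluate every option.
    return max(options, key=score)

def choose_satisficing(options, score, good_enough):
    # Heuristic: take the first acceptable option and stop looking.
    for opt in options:
        if score(opt) >= good_enough:
            return opt
    return options[-1]  # nothing cleared the bar; fall back to the last option

candidates = [3, 7, 2, 9, 5]
print(choose_exhaustive(candidates, score=lambda x: x))            # → 9
print(choose_satisficing(candidates, lambda x: x, good_enough=6))  # → 7
```

The heuristic returns 7 rather than the optimum 9: faster, usually adequate, and predictably suboptimal in some environments — which is the "bias" half of the story.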
05:58
But good enough might mean good enough in our ancestral environments, where these cognitive mechanisms evolved.
但所謂的「足夠好」,可能是指在我們祖先演化的環境中足夠好。
06:05
In contexts that are more removed from these ancestral environments, we can end up making systematically
在與祖先環境差異較大的情境下,我們最終可能會做出系統性的
06:11
bad choices or errors in reasoning because we're automatically, subconsciously invoking the heuristic in a situation where that heuristic isn't necessarily
錯誤選擇或推理失誤,因為我們會自動、潛意識地套用該捷徑,而該捷徑在當下情境中未必是
06:19
the best rule to follow.
最該遵循的準則。
06:21
So the term "bias," in this context, refers to the systematic gap between how we're actually disposed to behave or reason and how
因此,在此脈絡下,「偏誤」(bias)一詞指的是:我們實際行為或推理方式,與
06:30
we ought to behave or reason, by the standards of some normative theory of reasoning or decision-making.
某種規範性決策或推理理論所要求的行為或推理方式之間,存在的系統性落差。
06:35
The heuristic is the rule of thumb that we're using to make the decision or the judgment.
捷徑是我們用來做決策或判斷的經驗法則。
06:40
The bias is the predictable effect of using that rule of thumb in situations where it doesn't give an optimal result.
偏誤則是在不適合的情境下使用該經驗法則時,會導致可預測的不良後果。
06:46
I know this is all pretty general, so let me give you an example of a cognitive bias and its related heuristic.
我知道這些內容都相當籠統,所以我來舉個認知偏誤及其相關啟發法的例子。
06:51
This is known as the anchoring heuristic, or the anchoring effect.
這被稱為錨定啟發法,或稱為錨定效應。
06:55
Kahneman and Tversky did a famous experiment where they asked a group of subjects to estimate the percentage of countries in Africa
卡尼曼(Kahneman)與特沃斯基(Tversky)曾做過一個著名的實驗,他們請一組受試者估計非洲國家中,
07:03
that are members of the United Nations.
加入聯合國的比例。
07:05
Of course, most of us aren't going to know this.
當然,我們大多數人並不知道這個數字。
07:07
For most of us, this is just going to be a guess.
對大多數人來說,這只能靠猜。
07:09
Now, for one group of subjects, they asked the question, is this percentage more or less than 10%?
當時,對其中一組受試者,他們問的問題是:這個比例是高於還是低於 10%?
07:16
For another group of subjects, they asked the question, is it more or less than 65%?
對另一組受試者,他們問的問題是:這個比例是高於還是低於 65%?
07:21
And the average of the answers of the two groups differed significantly.
兩組受試者的平均答案有顯著差異。
07:25
In the first group, the average answer was 25%.
第一組的平均答案是 25%。
07:29
In the second group, the average answer was 45%.
第二組的平均答案是 45%。
07:33
The second group estimated higher than the first group.
第二組的估計值高於第一組。
07:37
Why?
為什麼?
07:38
Well, this is what seems to be going on.
嗯,這似乎是背後的運作機制。
07:40
If subjects were exposed to a higher number, their estimates were anchored to that number.
如果受試者接觸到較高的數字,他們的估計值就會被「錨定」在那個數字上。
07:45
Give them a higher number, they estimate higher.
給他們較高的數字,他們就估計得較高。
07:47
Give them a lower number, they estimate lower.
給他們較低的數字,他們就估計得較低。
07:50
So the idea behind this anchoring heuristic is that when people are asked to estimate a probability or an uncertain number,
所以,這個錨定啟發法背後的概念是,當人們被要求估計一個機率或一個不確定的數字時,
07:57
rather than try to perform a complex calculation in their heads, they start with an implicitly suggested reference point, the anchor, and make adjustments from that reference
他們並不會試圖在腦中進行複雜的計算,而是從一個隱含暗示的參考點(也就是錨點)開始,然後從那個參考點進行調整,
08:06
point to reach their estimate.
以得出他們的估計值。
08:08
This is a shortcut.
這是一個捷徑。
08:09
It's a rule of thumb.
這是一個經驗法則。
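The anchor-and-adjust process just described can be caricatured in a toy model: start at the suggested anchor and adjust only partway toward one's own private guess. The 0.5 adjustment weight and the guesses are invented parameters, not figures from Kahneman and Tversky's data.

```python
# Toy model of the anchoring heuristic: the reported estimate starts at the
# anchor and moves only partway toward the subject's private guess.
# The adjustment weight of 0.5 is an invented parameter for illustration.

def anchored_estimate(anchor, private_guess, adjustment=0.5):
    return anchor + adjustment * (private_guess - anchor)

# Two groups share the same private guess (30%) but see different anchors:
low_group = anchored_estimate(anchor=10, private_guess=30)
high_group = anchored_estimate(anchor=65, private_guess=30)

print(low_group, high_group)  # → 20.0 47.5
```

Because the adjustment is insufficient, the higher-anchored group lands higher, qualitatively matching the 25% vs. 45% pattern in the experiment.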
08:11
Now, you might think, in this case, it's not just the number.
現在,你可能會想,在這種情況下,這不僅僅是數字的問題。
08:13
It's the way the question is phrased that biased the estimates.
是問題的措辭方式讓估計產生了偏誤。
08:17
You might think the subjects are assuming that the researchers know the answer, and the reference number is therefore related in some way
你可能會認為,受試者假設研究人員知道答案,因此這個參考數字在某種程度上
08:22
to the actual answer.
與實際答案有某種關聯。
08:24
But researchers have done this experiment many times in different ways.
但研究人員已經用不同的方式多次進行這個實驗。
08:28
In one version, for example, the subjects are asked the same question, to estimate the percentage of African nations in the UN.
例如,在一個版本中,受試者被問到相同的問題,估計聯合國中非洲國家的比例。
08:34
But before they answer, the researcher spins a roulette wheel in front of a group, waits for it to land on a number so they can all see the number,
但在他們回答之前,研究人員在一群人面前轉動輪盤,等待它停在某個數字,讓大家都能看到這個數字。
08:42
then asks them if the percentage of African nations is larger or smaller than the number on the roulette wheel.
然後問他們,非洲國家的比例是大於還是小於輪盤上的數字。
08:49
The results are the same.
結果是一樣的。
08:51
If the number is high, people estimate high.
如果數字很高,人們的估計就會偏高。
08:54
If the number is low, people estimate low.
如果數字很低,人們的估計就會偏低。
08:57
And in this case, the subjects couldn't possibly assume the number on the roulette wheel had any relation to the actual percentage of African nations in the UN,
在這種情況下,受試者不可能假設輪盤上的數字與聯合國中非洲國家的實際比例有任何關聯。
09:05
but their estimates were anchored to this number anyway.
但他們的估計還是被這個數字錨定了。
09:08
Now, results like these have proven to be really important for understanding how human beings process information
像這樣的結果已被證明非常重要,有助於理解人類如何處理資訊,
09:13
and make judgments on the basis of information.
以及如何基於資訊做出判斷。
09:16
The anchoring effect shows up in strategic negotiation behavior, consumer shopping behavior, in the behavior of stock and real estate markets--
錨定效應出現在策略性談判行為、消費者購物行為、股票和房地產市場的行為中——
09:23
it shows up everywhere.
它無所不在。
09:24
It's a very widespread and robust effect.
這是一個非常普遍且穩固的效應。
09:27
Now note also that this behavior is, by the standards of our normative theories of correct reasoning, systematically irrational.
現在也請注意,根據我們關於正確推理的規範性理論標準,這種行為是系統性非理性的。
09:35
This is an example of a cognitive bias.
這是認知偏誤的一個例子。
09:38
Now, this would be interesting but not deeply significant if the anchoring effect was the only cognitive bias that we discovered.
如果「錨定效應」是我們發現的唯一認知偏誤,那麼這雖然有趣,但並非具有深遠的意義。
09:45
But if you go to Wikipedia and type in, "list of cognitive biases," you'll find a page that lists just over 100 of these biases.
但如果你去維基百科輸入「認知偏誤列表」,你會找到一個列出超過 100 種這種偏誤的頁面。
09:54
And the list is not exhaustive.
而且這份清單並不完整。
09:55
I encourage everyone to check it out.
我鼓勵大家都去查看一下。
09:57
If you spend much time looking at these links, you'll get a crash course in cognitive biases.
如果你花很多時間看這些連結,你會對認知偏誤有一個速成的了解。
10:01
So what's the upshot of all this for us, as critical thinkers?
那麼,對於身為批判性思考者的我們來說,這一切的要點是什麼?
10:05
Well, I'm going to get into this a bit more in the next podcast episode.
嗯,我將在下一期播客節目中更深入探討這個問題。
10:08
But it's clear that at the very least, we all need to acquire a certain level of cognitive bias literacy.
但很明顯,至少我們都需要具備一定程度的認知偏誤知識。
10:14
We don't need to become experts, but we should all be able to recognize the most important and most discussed cognitive biases.
我們不需要成為專家,但我們都應該能夠辨識最重要且最常被討論的認知偏誤。
10:21
We should all know what confirmation bias is, what the base rate fallacy is, what the gambler's fallacy is, and so on.
我們都應該知道什麼是確認偏誤、什麼是基本比率謬誤、什麼是賭徒謬誤等等。
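Of the biases listed here, the base rate fallacy lends itself to a short worked example. The disease-screening numbers below are a standard textbook illustration, invented for this sketch rather than taken from the episode.

```python
# Illustration of the base rate fallacy via Bayes' rule: a fairly accurate
# test for a rare condition still yields mostly false positives.
# All numbers are invented for illustration.

base_rate = 0.01        # P(condition): the condition is rare
sensitivity = 0.99      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

# Neglecting the 1% base rate, people guess close to 99%; the actual
# probability of having the condition given a positive test is about 17%.
print(round(p_condition_given_positive, 2))  # → 0.17
```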
10:28
These are just as important as understanding the standard logical fallacies.
這些與理解標準的邏輯謬誤同樣重要。
10:32
Why?
為什麼?
10:32
Because as critical thinkers, we need to be aware of the processes that influence our judgments, especially if those processes systematically bias us
因為身為批判性思考者,我們需要意識到那些影響我們判斷的過程,特別是當這些過程系統性地使我們產生偏誤時。
10:40
in ways that make us prone to error and bad decisions.
這些偏誤會讓我們容易犯錯並做出糟糕的決定。
10:44
Also, we want to be on the lookout for conscious manipulation and exploitation of these biases by people who are in the influence business, whose
此外,我們也要提防那些從事影響力行業的人對這些偏誤進行有意識的操縱和利用,他們的工作就是
10:53
job it is to make people think and act in ways that further their interests rather than your interests.
讓人們以符合他們利益而非你利益的方式去思考和行動。
10:58
We know that marketing firms and political campaigns hire experts in these areas to help them craft their messages.
我們知道行銷公司和政治競選活動會聘請這些領域的專家來協助他們精心設計訊息。
11:05
Now, let me give you a hypothetical example, though I know some people who'd say this is not hypothetical.
現在,讓我給你一個假設性的例子,雖然我知道有些人會說這並非假設。
11:10
Let's say you're a media advisor to a government that has just conducted a major military strike on a foreign country, and there were civilian casualties
假設你是一位媒體顧問,服務於某個政府,該政府剛對外國發動了一場重大軍事打擊,而且有平民傷亡
11:18
resulting from the strike.
是此次攻擊所導致的。
11:20
Now, if the number of civilians killed is high, then that's bad for the government.
現在,如果平民死亡人數很高,對政府來說是不利的。
11:24
It will be harder to maintain popular support for this action.
要維持民眾對此行動的支持將會更加困難。
11:27
Let's say our intelligence indicates that the number of casualties is in the thousands.
假設我們的情報顯示,傷亡人數在數千人左右。
11:32
This is not a good number.
這不是一個好看的數字。
11:33
It's going to be hard to sell this action if that's the number that everyone reads in the news the next day.
如果這是大家隔天在新聞上讀到的數字,要為此行動辯護將會很難。
11:38
So as an advisor to this government, what do you recommend doing?
所以,身為這位政府的顧問,你會建議怎麼做?
11:42
I'll tell you what I would do if all I cared about was furthering the government's interests.
我來告訴你,如果我只在乎推進政府利益的話,我會怎麼做。
11:47
I'd say, Mr.
我會說,總統先生——
11:47
President-- or whoever's in charge-- we need to issue a statement before the press gets ahold of this.
——或者任何掌權者——我們需要在媒體掌握消息之前發表一份聲明。
11:53
And in this statement, we need to say that the number of estimated casualties resulting from this strike is low, maybe 100 or 200, at the most.
在這份聲明中,我們需要說,此次攻擊造成的預估傷亡人數很低,大概最多只有 100 或 200 人。
12:02
Now, why would I advise this?
現在,為什麼我會這樣建議?
12:04
Because I know about the anchoring effect.
因為我知道「錨定效應」(anchoring effect)。
12:06
I know that the public's estimate of the real number of casualties is going to be anchored to the first number they're
我知道公眾對真實傷亡人數的估計,將會被錨定在他們最初
12:12
exposed to.
接觸到的數字上。
12:12
And if that number's low, they'll estimate low.
如果那個數字很低,他們的估計就會偏低。
12:16
And if data eventually comes out with numbers that are higher, the public's estimates will still be lower than they would be if we
即使後來出現的數據顯示數字更高,公眾的估計值仍然會比我們
12:22
didn't get in there first and feed them that first low number.
沒有先發制人、灌輸給他們那個低數字時的估計來得低。
12:26
Now, that's what I would do, if all I cared about was manipulating public opinion.
這就是我會做的事,如果我只在乎操縱公眾輿論的話。
12:30
This is a hypothetical example, but trust me when I tell you that decisions like these are made every day under the advice of professionals
這是一個假設性的例子,但我向你保證,在專業人士的建議下,每天都在發生類似的決策
12:36
who are experts in this psychological literature.
這些專業人士是此心理學文獻的專家。
12:39
So there's a broader issue at stake.
因此,這涉及一個更廣泛的議題。
12:41
This is the kind of background knowledge that is important if our ultimate goal is to be able to claim ownership and responsibility
若我們的最終目標是能夠對自己的信念與價值觀宣稱擁有權並承擔責任,
12:47
for our own beliefs and values.
那麼這類背景知識便至關重要。
12:50
And that's what critical thinking is all about.
而這正是批判性思考的全部意義所在。
12:52
Well, that's going to wrap it up for this episode.
好了,本集到此結束。
12:55
In the next episode, I'm going to look at a few more case studies to help highlight how important cognitive biases are, and maybe give you some more incentive to look into them.
在下一集,我將探討更多案例研究,以突顯認知偏誤的重要性,或許能給你更多動力去深入研究。
13:03
I'll leave some links in the show notes to some online resources, which you can find at criticalthinkerpodcast.com.
我會在節目筆記中留下一些連結,指向一些線上資源,你可以在 criticalthinkerpodcast.com 找到。
13:10
Thanks for listening, and we'll see you next time.
感謝收聽,我們下次見。
13:12
[MUSIC PLAYING]
[音樂播放中]

Cognitive Biases: What They Are, Why They're Important

📝 影片摘要

本單元深入解析「認知偏誤」(Cognitive Biases)的概念及其對批判性思考的重要性。課程首先區分了描述性(人類如何思考)與規範性(人類應如何思考)的思維研究,並引用丹尼爾·卡尼曼(Daniel Kahneman)與阿摩司·特沃斯基(Amos Tversky)的研究,說明大腦常使用「捷徑」(Heuristics)來快速決策。雖然捷徑在演化上能幫助生存,但在現代複雜環境中常導致系統性的「偏誤」。講者以「錨定效應」(Anchoring Effect)為例,說明初始數字如何影響後續判斷,並強調了解這些偏誤有助於我們避免錯誤決策及識別他人(如政治或行銷操作)的操縱。

📌 重點整理

  • 批判性思考不僅需要邏輯,更需要了解大腦運作的背景知識。
  • 人類思維存在「規範性理論」(應然)與「描述性現實」(實然)的差距。
  • 大腦演化出「捷徑」(Heuristics)以在資訊不完整時快速決策,但也因此產生「認知偏誤」。
  • 認知偏誤是指在不適用的情境下使用捷徑,導致可預測的系統性錯誤。
  • 「錨定效應」(Anchoring Effect):人們的估計會受到最先接觸到的數字(錨點)強烈影響。
  • 錨定效應即使在隨機數字(如輪盤)影響下依然存在,證明其非理性本質。
  • 了解認知偏誤能幫助我們識別並抵抗外界(如媒體、廣告)的操縱。
  • 掌握認知偏誤是承擔信念責任、成為真正批判性思考者的關鍵。
📖 專有名詞百科
哲學
Philosophy
認知的
Cognitive
描述的
Descriptive
規範的
Normative
推論
Inferences
計算的
Computational
啟發法
Heuristics
錨定
Anchoring
穩健的
Robust
假設的
Hypothetical


📚 共 10 個重點單字
Philosophy /ˈfɪləsəfi/ noun
The study of the fundamental nature of knowledge, reality, and existence.
哲學;人生觀。
📝 例句
"I teach philosophy for a living."
我以教授哲學為業。
✨ 延伸例句
"His philosophy is to work hard and play hard."
他的人生哲學是努力工作並盡情玩樂。
Cognitive /ˈkɒɡnɪtɪv/ adjective
Relating to the mental processes of perception, memory, judgment, and reasoning.
認知的;認識的。
📝 例句
"This episode is about cognitive biases."
這個單元是關於認知偏誤。
✨ 延伸例句
"Cognitive psychology studies how we process information."
認知心理學研究我們如何處理資訊。
Descriptive /dɪˈskrɪptɪv/ adjective
Serving or seeking to describe.
描述的;說明的。
📝 例句
"One is a purely descriptive question."
一類是純描述性的問題。
✨ 延伸例句
"The report was purely descriptive, not analytical."
那份報告純屬描述性,缺乏分析。
Normative /ˈnɔːmətɪv/ adjective
Relating to a norm, especially an ideal standard.
規範的;標準的。
📝 例句
"The other is a prescriptive or normative question."
另一類是規範性或應然性的問題。
✨ 延伸例句
"We study normative theories of rationality."
我們研究理性的規範理論。
Inferences /ˈɪnfərənsɪz/ noun
Conclusions reached on the basis of evidence and reasoning.
推論;推理。
📝 例句
"We're interested in truth-preserving inferences."
我們感興趣的是保真推理。
✨ 延伸例句
"Logical inferences must be valid."
邏輯推論必須是有效的。
Computational /ˌkɒmpjəˈteɪʃənəl/ adjective
Relating to or involving computation and calculation.
計算的;電腦運算的。
📝 例句
"Can be computationally very intensive."
在計算上可能非常密集。
✨ 延伸例句
"Computational power has increased rapidly."
運算能力已經快速增長。
Heuristics /hjʊˈrɪstɪks/ noun
Experience-based techniques for problem solving.
啟發法;捷徑。
📝 例句
"These rules of thumb are the heuristics."
這些經驗法則就是所謂的捷徑。
✨ 延伸例句
"Mental heuristics help us make quick decisions."
心理捷徑幫助我們做快速決策。
Anchoring /ˈæŋkərɪŋ/ noun
The cognitive bias where individuals depend too heavily on an initial piece of information.
錨定(效應)。
📝 例句
"This is known as the anchoring heuristic."
這被稱為錨定啟發法。
✨ 延伸例句
"Anchoring can affect negotiations significantly."
錨定效應會顯著影響談判。
Robust /rəˈbʌst/ adjective
Strong and healthy; vigorous.
穩健的;強健的。
📝 例句
"It's a very widespread and robust effect."
這是一個非常普遍且穩固的效應。
✨ 延伸例句
"The software is robust and reliable."
這款軟體非常穩健可靠。
Hypothetical /ˌhaɪpəˈθetɪkəl/ adjective
Based on or serving as a hypothesis.
假設的;假說的。
📝 例句
"Let me give you a hypothetical example."
讓我給你一個假設性的例子。
✨ 延伸例句
"It is a hypothetical situation, not a real one."
這是一個假設的情境,而非真實的。
🎯 共 10 題測驗

1. What is Kevin DeLaplante's primary profession?

凱文·德拉普蘭特的職業是什麼?

Kevin states in the introduction that he teaches philosophy for a living.

凱文在開頭提到他以教授哲學為業。

2. Who are the two researchers that laid the foundations for the 'biases and heuristics' tradition?

為「偏誤與捷徑」傳統奠定基礎的兩位研究者是誰?

The video credits Daniel Kahneman and Amos Tversky for stimulating this research starting in the early 1970s.

影片指出丹尼爾·卡尼曼與阿摩司·特沃斯基自1970年代初期的研究奠定了此基礎。

3. What distinguishes a 'normative' question from a 'descriptive' question?

「規範性」問題與「描述性」問題有何不同?

Descriptive asks how humans actually reason, while normative asks how they should reason.

描述性探討人類實際如何推理,而規範性探討人類應該如何推理。

4. Why do humans use 'heuristics' (mental shortcuts)?

人類為什麼使用「捷徑」(心理捷徑)?

Normative theories are computationally intensive, and evolution favored shortcuts for quick decision-making in risky situations.

規範性理論計算量大,而演化偏好捷徑以便在風險情境下快速決策。

5. In the UN African nations experiment, what influenced the participants' estimates?

在聯合國非洲國家的實驗中,什麼影響了參與者的估計?

Participants' estimates were anchored to the reference numbers suggested by the researchers (10% or 65%).

參與者的估計被研究者提示的參考數字(10% 或 65%)所錨定。

6. Which of the following is NOT mentioned as an area where the anchoring effect appears?

以下哪一項未被提及為錨定效應出現的領域?

The video mentions negotiation, shopping, and stock markets, but not genetics.

影片提到了談判、購物和股市,但未提到基因遺傳。

7. What does the term 'bias' refer to in this context?

在此脈絡下,「偏誤」指的是什麼?

Bias refers to the systematic gap between how we actually reason and how we should reason by normative standards.

偏誤指的是我們實際推理方式與規範性標準應有的推理方式之間的系統性差距。

8. According to the video, why is knowledge of cognitive biases important for critical thinkers?

根據影片,為什麼了解認知偏誤對批判性思考者很重要?

It helps us be aware of processes that bias us and spot manipulation by those in the 'influence business'.

這有助於我們意識到使我們產生偏誤的過程,並識別「影響力行業」人士的操縱。

9. What is the hypothetical scenario regarding a government military strike?

關於政府軍事打擊的假設性情境是什麼?

The advisor suggests releasing a low casualty number first to anchor public estimates lower.

顧問建議先發布低傷亡數字,以將公眾的估計錨定在較低的數值。

10. What is the ultimate goal of understanding these background concepts?

了解這些背景概念的最終目標是什麼?

The speaker concludes that this knowledge is vital for claiming ownership and responsibility for our own beliefs and values.

演講者總結道,這些知識對於宣稱擁有並承擔我們自身信念與價值觀的責任至關重要。
