Clicker Questions

To go along with:

- Modern Data Science with R, 3rd edition, by Baumer, Kaplan, and Horton
- An Introduction to Statistical Learning with Applications in R, by James, Witten, Hastie, and Tibshirani
- The reason to take random samples is:1
- to make cause and effect conclusions
- to get as many variables as possible
- it’s easier to collect a large dataset
- so that the data are a good representation of the population
- I have no idea why one would take a random sample
- The reason to allocate/assign explanatory variables is:2
- to make cause and effect conclusions
- to get as many variables as possible
- it’s easier to collect a large dataset
- so that the data are a good representation of the population
- I have no idea what you mean by “allocate/assign” (or “explanatory variable” for that matter)
- Approximately how big is a tweet?3
- 0.01 KB
- 0.1 KB
- 1 KB
- 100 KB
- 1000 KB = 1 MB
- \(R^2\) measures:4
- the proportion of variability in vote margin as explained by tweet share.
- the proportion of variability in tweet share as explained by vote margin.
- how appropriate the linear part of the linear model is.
- whether or not particular variables should be included in the model.
- R / RStudio / Quarto5
- all good
- started, progress is slow and steady
- started, very stuck
- haven’t started yet
- what do you mean by “R”?
- Git / GitHub6
- all good
- started, progress is slow and steady
- started, very stuck
- haven’t started yet
- what do you mean by “Git”?
- Which of the following includes talking to the remote version of GitHub?7
- changing your name (updating the YAML)
- committing the file(s)
- pushing the file(s)
- some of the above
- all of the above
- What is the error?8
- poor assignment operator
- unmatched quotes
- improper syntax for function argument
- invalid object name
- no mistake
- What is the error?9
- poor assignment operator
- unmatched quotes
- improper syntax for function argument
- invalid object name
- no mistake
- What is the error?10
- poor assignment operator
- unmatched quotes
- improper syntax for function argument
- invalid object name
- no mistake
- What is the error?11
- poor assignment operator
- unmatched quotes
- improper syntax for function argument
- invalid object name
- no mistake
- What is the error?12
- poor assignment operator
- unmatched quotes
- improper syntax for function argument
- invalid object name
- no mistake
- Do you keep a calendar / schedule / planner?13
- Yes
- No
- Do you keep a calendar / schedule / planner? If you answered “Yes” …14
- Yes, on Google Calendar
- Yes, on Calendar for macOS
- Yes, on Outlook for Windows
- Yes, in some other app
- Yes, by hand
- Where should I put things I’ve created for the HW (e.g., data, .ics file, etc.)15
- Upload into remote GitHub directory
- In the local folder which also has the R project
- In my Downloads
- Somewhere on my Desktop
- In my Home directory
- The goal of making a figure is…16
- To draw attention to your work.
- To facilitate comparisons.
- To provide as much information as possible.
- A good reason to make a particular choice of a graph is:17
- Because the journal / field has particular expectations for how the data are presented.
- Because some variables naturally fit better on some graphs (e.g., numbers on scatter plots).
- Because that graphic displays the message you want as optimally as possible.
- Why are the points orange?18
- R translates “navy” into orange.
- color must be specified in `geom_point()`
- color must be specified outside the `aes()` function
- the default plot color is orange
- Why are the dots blue and the lines colored?19
- dot color is given as “navy”, line color is given as `wday`.
- both colors are specified in the `ggplot()` function.
- dot coloring takes precedence over line coloring.
- line coloring takes precedence over dot coloring.
- Setting vs. Mapping. If I want information to be passed to all data points (not variable):20
- map the information inside the `aes()` function.
- set the information outside the `aes()` function.
- The Snow figure was most successful at:21
- making the data stand out
- facilitating comparison
- putting the work in context
- simplifying the story
- The Challenger figure(s) was(were) least successful at:22
- making the data stand out
- facilitating comparison
- putting the work in context
- simplifying the story
- The biggest difference between Snow and the Challenger was:23
- The amount of information portrayed.
- One was better at displaying cause.
- One showed the relevant comparison better.
- One was more artistic.
- Caffeine and Calories. What was the biggest concern over the average value axes?24
- It isn’t at the origin.
- They should have used all the data possible to find averages.
- There wasn’t a random sample.
- There wasn’t a label explaining why the axes were where they were.
- What is wrong with the following code?25
- should only be one =
- Sydney should be lower case
- name should not be in quotes
- use mutate instead of filter
- babynames in wrong place
- Which data represents the ideal format for ggplot2 and dplyr?26

(a)

| year | Algeria | Brazil | Colombia |
|------|---------|--------|----------|
| 2000 | 7       | 12     | 16       |
| 2001 | 9       | 14     | 18       |

(b)

| country  | Y2000 | Y2001 |
|----------|-------|-------|
| Algeria  | 7     | 9     |
| Brazil   | 12    | 14    |
| Colombia | 16    | 18    |

(c)

| country  | year | value |
|----------|------|-------|
| Algeria  | 2000 | 7     |
| Algeria  | 2001 | 9     |
| Brazil   | 2000 | 12    |
| Brazil   | 2001 | 14    |
| Colombia | 2000 | 16    |
| Colombia | 2001 | 18    |
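Table (c) is the long (“tidy”) form that ggplot2 and dplyr want: one row per country-year pair. A sketch of getting there from the wide table (a), assuming tidyr (the data frame names here are made up for illustration):

``` r
library(tidyr)

# wide form, as in table (a): one column per country
wide <- data.frame(
  year = c(2000, 2001),
  Algeria = c(7, 9),
  Brazil = c(12, 14),
  Colombia = c(16, 18)
)

# long form, as in table (c): one row per country-year pair
long <- wide |>
  pivot_longer(cols = -year,
               names_to = "country",
               values_to = "value")
long
```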
- Each of the statements except one will accomplish the same calculation. Which one does not match?27

``` r
# (a)
babynames |>
  group_by(year, sex) |>
  summarize(totalBirths = sum(num))

# (b)
group_by(babynames, year, sex) |>
  summarize(totalBirths = sum(num))

# (c)
group_by(babynames, year, sex) |>
  summarize(totalBirths = mean(num))

# (d)
temp <- group_by(babynames, year, sex)
summarize(temp, totalBirths = sum(num))

# (e)
summarize(group_by(babynames, year, sex),
          totalBirths = sum(num))
```
- Fill in Q1.28
- `filter()`
- `arrange()`
- `select()`
- `mutate()`
- `group_by()`
- Fill in Q2.29
- `(year, sex)`
- `(year, name)`
- `(year, num)`
- `(sex, name)`
- `(sex, num)`
- Fill in Q3.30
- `n_distinct(name)`
- `n_distinct(n)`
- `sum(name)`
- `sum(num)`
- `mean(num)`
- Running the code.31

``` r
babynames <- babynames::babynames |>
  rename(num = n)

babynames |>
  filter(name %in% c("Jane", "Mary")) |>  # just the Janes and Marys
  group_by(name, year) |>                 # for each year for each name
  summarize(total = sum(num))
```

```
# A tibble: 276 × 3
# Groups:   name [2]
   name   year total
   <chr> <dbl> <int>
 1 Jane   1880   215
 2 Jane   1881   216
 3 Jane   1882   254
 4 Jane   1883   247
 5 Jane   1884   295
 6 Jane   1885   330
 7 Jane   1886   306
 8 Jane   1887   288
 9 Jane   1888   446
10 Jane   1889   374
# ℹ 266 more rows
```
``` r
babynames |>
  filter(name %in% c("Jane", "Mary")) |>
  group_by(name, year) |>
  summarize(number = sum(num))
```

```
# A tibble: 276 × 3
# Groups:   name [2]
   name   year number
   <chr> <dbl>  <int>
 1 Jane   1880    215
 2 Jane   1881    216
 3 Jane   1882    254
 4 Jane   1883    247
 5 Jane   1884    295
 6 Jane   1885    330
 7 Jane   1886    306
 8 Jane   1887    288
 9 Jane   1888    446
10 Jane   1889    374
# ℹ 266 more rows
```
``` r
babynames |>
  filter(name %in% c("Jane", "Mary")) |>
  group_by(name, year) |>
  summarize(n_distinct(name))
```

```
# A tibble: 276 × 3
# Groups:   name [2]
   name   year `n_distinct(name)`
   <chr> <dbl>              <int>
 1 Jane   1880                  1
 2 Jane   1881                  1
 3 Jane   1882                  1
 4 Jane   1883                  1
 5 Jane   1884                  1
 6 Jane   1885                  1
 7 Jane   1886                  1
 8 Jane   1887                  1
 9 Jane   1888                  1
10 Jane   1889                  1
# ℹ 266 more rows
```
``` r
babynames |>
  filter(name %in% c("Jane", "Mary")) |>
  group_by(name, year) |>
  summarize(n_distinct(num))
```

```
# A tibble: 276 × 3
# Groups:   name [2]
   name   year `n_distinct(num)`
   <chr> <dbl>             <int>
 1 Jane   1880                 1
 2 Jane   1881                 1
 3 Jane   1882                 1
 4 Jane   1883                 1
 5 Jane   1884                 1
 6 Jane   1885                 1
 7 Jane   1886                 1
 8 Jane   1887                 1
 9 Jane   1888                 1
10 Jane   1889                 1
# ℹ 266 more rows
```
```
Error in `summarize()`:
ℹ In argument: `sum(name)`.
ℹ In group 1: `name = "Jane"` and `year = 1880`.
Caused by error in `base::sum()`:
! invalid 'type' (character) of argument
```

```
# A tibble: 276 × 3
# Groups:   name [2]
   name   year `mean(num)`
   <chr> <dbl>       <dbl>
 1 Jane   1880         215
 2 Jane   1881         216
 3 Jane   1882         254
 4 Jane   1883         247
 5 Jane   1884         295
 6 Jane   1885         330
 7 Jane   1886         306
 8 Jane   1887         288
 9 Jane   1888         446
10 Jane   1889         374
# ℹ 266 more rows
```

```
# A tibble: 276 × 3
# Groups:   name [2]
   name   year `median(num)`
   <chr> <dbl>         <dbl>
 1 Jane   1880           215
 2 Jane   1881           216
 3 Jane   1882           254
 4 Jane   1883           247
 5 Jane   1884           295
 6 Jane   1885           330
 7 Jane   1886           306
 8 Jane   1887           288
 9 Jane   1888           446
10 Jane   1889           374
# ℹ 266 more rows
```
- Fill in Q1.32
- `gdp`
- `year`
- `gdpval`
- `country`
- `-country`
- Fill in Q2.33
- `gdp`
- `year`
- `gdpval`
- `country`
- `-country`
- Fill in Q3.34
- `gdp`
- `year`
- `gdpval`
- `country`
- `-country`
- Response to stimulus (in ms) after only 3 hrs of sleep for 9 days. You want to make a plot with the subject’s reaction time (y-axis) vs the number of days of sleep restriction (x-axis) using the following `ggplot()` code. Which data frame should you use?35
- use raw data
- use `pivot_wider()` on raw data
- use `pivot_longer()` on raw data
```
# A tibble: 18 × 11
   Subject day_0 day_1 day_2 day_3 day_4 day_5 day_6 day_7 day_8 day_9
     <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
 1     308  250.  259.  251.  321.  357.  415.  382.  290.  431.  466.
 2     309  223.  205.  203.  205.  208.  216.  214.  218.  224.  237.
 3     310  199.  194.  234.  233.  229.  220.  235.  256.  261.  248.
 4     330  322.  300.  284.  285.  286.  298.  280.  318.  305.  354.
 5     331  288.  285   302.  320.  316.  293.  290.  335.  294.  372.
 6     332  235.  243.  273.  310.  317.  310   454.  347.  330.  254.
 7     333  284.  290.  277.  300.  297.  338.  332.  349.  333.  362.
 8     334  265.  276.  243.  255.  279.  284.  306.  332.  336.  377.
 9     335  242.  274.  254.  271.  251.  255.  245.  235.  236.  237.
10     337  312.  314.  292.  346.  366.  392.  404.  417.  456.  459.
11     349  236.  230.  239.  255.  251.  270.  282.  308.  336.  352.
12     350  256.  243.  256.  256.  269.  330.  379.  363.  394.  389.
13     351  251.  300.  270.  281.  272.  305.  288.  267.  322.  348.
14     352  222.  298.  327.  347.  349.  353.  354.  360.  376.  389.
15     369  272.  268.  257.  278.  315.  317.  298.  348.  340.  367.
16     370  225.  235.  239.  240.  268.  344.  281.  348.  365.  372.
17     371  270.  272.  278.  282.  279.  285.  259.  305.  351.  369.
18     372  269.  273.  298.  311.  287.  330.  334.  343.  369.  364.
```
``` r
sleep_long <- sleep_wide |>
  pivot_longer(cols = -Subject,
               names_to = "day",
               names_prefix = "day_",
               values_to = "reaction_time")
sleep_long
```

```
# A tibble: 180 × 3
   Subject day   reaction_time
     <dbl> <chr>         <dbl>
 1     308 0              250.
 2     308 1              259.
 3     308 2              251.
 4     308 3              321.
 5     308 4              357.
 6     308 5              415.
 7     308 6              382.
 8     308 7              290.
 9     308 8              431.
10     308 9              466.
# ℹ 170 more rows
```
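The `ggplot()` call the question refers to is not reproduced in this copy; a minimal sketch of what it might look like (using a small made-up stand-in for `sleep_long`, since only the long format maps one reaction time to each row):

``` r
library(ggplot2)

# small made-up stand-in for sleep_long (Subject, day, reaction_time)
sleep_long <- data.frame(
  Subject = rep(c(308, 309), each = 3),
  day = rep(c("0", "1", "2"), times = 2),
  reaction_time = c(250, 259, 251, 223, 205, 203)
)

# reaction time (y) vs days of sleep restriction (x), one line per subject
p <- ggplot(sleep_long,
            aes(x = as.numeric(day), y = reaction_time, group = Subject)) +
  geom_line() +
  labs(x = "days of sleep restriction", y = "reaction time (ms)")
```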
- Consider band members from the Beatles and the Rolling Stones. Who is removed in a `right_join()`?36
- Mick
- John
- Paul
- Keith
- Impossible to know
- Consider band members from the Beatles and the Rolling Stones. Which variables are removed in a `right_join()`?37
- `name`
- `band`
- `plays`
- none of them
- What happens to Mick’s `plays` variable in a `full_join()`?38
- Mick is removed
- changes to guitar
- changes to bass
- `NA`
- `NULL`
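Handily, dplyr ships the small `band_members` (Mick, John, Paul) and `band_instruments` (John, Paul, Keith) tables, so these join questions can be checked directly:

``` r
library(dplyr)

# band_members:     name = Mick, John, Paul;  band  = Stones, Beatles, Beatles
# band_instruments: name = John, Paul, Keith; plays = guitar, bass, guitar
rj <- right_join(band_members, band_instruments, by = "name")
rj  # Mick is removed; all three variables (name, band, plays) are kept

fj <- full_join(band_members, band_instruments, by = "name")
fj  # everyone is kept; Mick's plays value is NA
```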
- Consider the `addTen()` function. The following output is a result of which `map_*()` call?39
- `map(c(1,4,7), addTen)`
- `map_dbl(c(1,4,7), addTen)`
- `map_chr(c(1,4,7), addTen)`
- `map_lgl(c(1,4,7), addTen)`

```
[1] "11.000000" "14.000000" "17.000000"
```
- Which of the following inputs is allowed?40
- `map(c(1, 4, 7), addTen)`
- `map(list(1, 4, 7), addTen)`
- `map(data.frame(a = 1, b = 4, c = 7), addTen)`
- some of the above
- all of the above
- Which of the following produces a different output?41
- `map(c(1, 4, 7), addTen)`
- `map(c(1, 4, 7), ~addTen(.x))`
- `map(c(1, 4, 7), ~addTen)`
- `map(c(1, 4, 7), function(hi) (hi + 10))`
- `map(c(1, 4, 7), ~(.x + 10))`
- What will the following code output?42
- 3 random normals
- 6 random normals
- 18 random normals
- In R, the `ifelse()` function takes the arguments:43
- question, yes, no
- question, no, yes
- statement, yes, no
- statement, no, yes
- option1, option2, option3
- What is the output of the following:44
- “cat”, 30, “cat”, “cat”, 6
- “cat”, “30”, “cat”, “cat”, “6”
- 1, “cat”, 5, “cat”, “cat”
- 1, “cat”, 5, NA, “cat”
- “1”, “cat”, “5”, NA, “cat”
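The code chunk this question refers to is not reproduced here, but the coercion it tests is easy to demonstrate with a made-up vector `x`: when `ifelse()` mixes character and numeric results, the numbers are converted to character strings, while `NA` stays `NA`.

``` r
x <- c(1, 30, 5, NA, 60)  # a made-up vector, not the one from the slides

# ifelse(test, yes, no) works element-wise; an NA test gives an NA result
result <- ifelse(x > 10, "cat", x)
result
#> [1] "1"   "cat" "5"   NA    "cat"
```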
- In R, the `set.seed()` function45
- makes your computations go faster
- keeps track of your computation time
- provides an important parameter
- repeats the function
- makes your results reproducible
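`set.seed()` fixes the state of the random number generator, so the same "random" draws come out every time. A quick sketch:

``` r
set.seed(47)           # fix the RNG state
first_draw <- rnorm(3)

set.seed(47)           # same seed, same state
second_draw <- rnorm(3)

identical(first_draw, second_draw)  # TRUE: the results are reproducible
```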
- If I run a hypothesis test with a type I error cut off of \(\alpha = 0.05\) and the null hypothesis is true, what is the probability of rejecting \(H_0\)?46
- 0.01
- 0.05
- 0.1
- I don’t know.
- No one knows.
- If I run a hypothesis test with a type I error cut off of \(\alpha = 0.05\) and the null hypothesis is true, and the technical conditions do not hold, what is the probability of rejecting \(H_0\)?47
- 0.01
- 0.05
- 0.1
- I don’t know.
- No one knows.
- If I run a hypothesis test with a type I error cut off of \(\alpha = 0.05\) and the null hypothesis is false, what is the probability of rejecting \(H_0\)?48
- 0.01
- 0.05
- 0.1
- I don’t know.
- No one knows.
- If I aim to create a 95% confidence interval, and the technical conditions hold, what is the probability that the CI will contain the true value of the parameter?49
- 0.90
- 0.95
- 0.99
- I don’t know.
- No one knows.
- If I aim to create a 95% confidence interval, and the technical conditions do not hold, what is the probability that the CI will contain the true value of the parameter?50
- 0.90
- 0.95
- 0.99
- I don’t know.
- No one knows.
- We typically compare means (across two groups) instead of medians because:51
- we don’t know the SE of the difference of medians
- means are inherently more interesting than medians
- permutation tests don’t work with medians
- the Central Limit Theorem doesn’t apply for medians.
- What are the technical assumptions for a t-test?52
- none
- normal data
- \(n \geq 30\)
- random sampling / random allocation for appropriate conclusions
- What are the technical conditions for permutation tests?53
- none
- normal data
- \(n \geq 30\)
- random sampling / random allocation for appropriate conclusions
- Follow up to permutation test: the assumptions change based on whether the statistic used is the mean, median, proportion, etc.54
- TRUE
- FALSE
- Why care about the distribution of the test statistic?55
- Better estimator
- So we can find rejection region
- So we can control power
- Because we love the CLT
- Given statistic \(T = r(X)\), how do we find a (sensible) test?56
- Maximize power
- Minimize type I error
- Control type I error
- Minimize type II error
- Control type II error
- Type I error is57
- We give him a raise when he deserves it.
- We don’t give him a raise when he deserves it.
- We give him a raise when he doesn’t deserve it.
- We don’t give him a raise when he doesn’t deserve it.
- Type II error is58
- We give him a raise when he deserves it.
- We don’t give him a raise when he deserves it.
- We give him a raise when he doesn’t deserve it.
- We don’t give him a raise when he doesn’t deserve it.
- Power is the probability that:59
- We give him a raise when he deserves it.
- We don’t give him a raise when he deserves it.
- We give him a raise when he doesn’t deserve it.
- We don’t give him a raise when he doesn’t deserve it.
- Why don’t we always reject \(H_0\)?60
- type I error too high
- type II error too high
- level of sig too high
- power too high
- The player is more worried about61
- A type I error
- A type II error
- The coach is more worried about62
- A type I error
- A type II error
- Increasing your sample size63
- Increases your power
- Decreases your power
- Making your significance level more stringent (\(\alpha\) smaller)64
- Increases your power
- Decreases your power
- A more extreme alternative65
- Increases your power
- Decreases your power
- What is the primary reason to use a permutation test (instead of a test built on calculus)?66
- more power
- lower type I error
- more resistant to outliers
- can be done on statistics with unknown sampling distributions
- What is the primary reason to bootstrap a CI (instead of creating a CI from calculus)?67
- larger coverage probabilities
- narrower intervals
- more resistant to outliers
- can be done on statistics with unknown sampling distributions
- Which of the following could not possibly be a bootstrap sample from the vector `c(4, 10, 8, 1, 2, 4)`?68
- `c(4, 4, 4, 4, 4, 4)`
- `c(4, 10, 8, 1, 2, 4)`
- `c(1, 2, 2, 4, 4, 2)`
- `c(10, 8, 1, 1, 8, 10)`
- `c(1, 2, 4, 3, 4, 10)`
- You have a sample of size n = 50. You sample with replacement 1000 times to get 1000 bootstrap samples. What is the sample size of each bootstrap sample?69
- 50
- 1000
- You have a sample of size n = 50. You sample with replacement 1000 times to get 1000 bootstrap samples. How many bootstrap statistics will you have?70
- 50
- 1000
- The bootstrap distribution is centered around the71
- population parameter
- sample statistic
- bootstrap statistic
- bootstrap parameter
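A sketch of the resampling mechanics behind these questions, using the vector from the earlier question: each bootstrap sample is drawn with replacement and has the same size as the original sample, and the bootstrap distribution of the statistic centers near the observed statistic.

``` r
set.seed(4747)  # arbitrary seed, for reproducibility
x <- c(4, 10, 8, 1, 2, 4)  # original sample, n = 6

# one bootstrap sample: n draws *with replacement* from x
one_boot <- sample(x, size = length(x), replace = TRUE)

# 1000 bootstrap samples -> 1000 bootstrap statistics (here, means)
boot_means <- replicate(1000, mean(sample(x, replace = TRUE)))

length(boot_means)  # 1000 bootstrap statistics
mean(boot_means)    # close to the sample statistic mean(x)
```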
- 95% CI for the difference in proportions:72
- (0.15, 0.173)
- (0.025, 0.975)
- (0.72, 0.87)
- (0.70, 0.873)
- (0.12, 0.179)
- Suppose a 95% bootstrap CI for the difference in trimmed means was (3,9), would you reject H0?73 (uh… What is the null hypothesis here???)
- yes
- no
- not enough information to know
- Given the situation where \(H_a\) is TRUE. Consider 100 CIs (for true difference in means, where each of the 100 CIs is created using a different dataset). The power of the test can be approximated by:74
- The proportion that contain the true difference in means.
- The proportion that do not contain the true difference in means.
- The proportion that contain zero.
- The proportion that do not contain zero.
- Small k in k-NN will help reduce the risk of overfitting.75
- TRUE
- FALSE
- The training error for 1-NN classifier is zero.76
- TRUE
- FALSE
- Generally, the k-NN algorithm can take any distance measure.77
- TRUE
- FALSE
- In R, the `kknn` method can use any distance measure.78
- TRUE
- FALSE
- The `k` in k-NN refers to79
- `k` groups
- `k` partitions
- `k` neighbors
- The `V` in V-fold CV refers to80
- `V` groups
- `V` partitions
- `V` neighbors
- All of the following are TRUE for the use of CART except for:81
- Can deal with missing data
- Require the assumptions of statistical models
- Variable selection is automatic
- Produce rules that are easy to interpret and implement
- Simplifying the decision tree by pruning peripheral branches will cause overfitting.82
- TRUE
- FALSE
- All are true with regards to PRUNING except:83
- Multiple (sequential) trees are possible to create by pruning
- CART lets tree grow to full extent, then prunes it
- Pruning generates successively smaller trees by pruning leaves
- Pruning is only beneficial when purity improvement is statistically significant
- Regression trees are invariant to monotonic transformations of:84
- the explanatory (predictor) variables
- the response variable
- both types of variables
- none of the variables
- CART suffers from85
- high variance
- high bias
- bagging uses bootstrapping on:86
- the variables
- the observations
- both
- neither
- oob samples87
- are in the test data
- are in the training data and provide independent predictions
- are in the training data but do not provide independent predictions
- oob samples are great because88
- oob is “boo” spelled backwards
- oob samples allow for independent predictions
- oob samples allow for more predictions than a “test group”
- oob data frame is always bigger than the test sample data frame
- some of the above
- bagging is random forests with:89
- m = # predictor variables
- all the observations
- the most important predictor variables isolated
- cross validation to choose m
- We have 80 training observations and 20 test observations. To get the test MSE, we need90
- 20 predictions from all trees
- 20 predictions from oob trees
- 80 predictions from all trees
- 80 predictions from oob trees
- With random forests, the value for m is chosen91
- using OOB error rate
- as p/3
- as sqrt(p)
- using cross validation
- A tuning parameter:92
- makes the model fit the training data as well as possible.
- makes the model fit the test data as well as possible.
- allows for a good model that does not overfit the data.
- With binary response and X1 and X2 continuous, kNN (k=1) creates a linear decision boundary.93
- TRUE
- FALSE
- With binary response and X1 and X2 continuous, a classification tree with one split creates a linear decision boundary.94
- TRUE
- FALSE
- With binary response and X1 and X2 continuous, a classification tree with one split creates the best linear decision boundary.95
- TRUE
- FALSE
- If the data are linearly separable, there exists a “widest street”.96
- yes
- no
- up to a constant
- with the alpha values “tuned” appropriately
- In the case of linearly separable data, the SVM:97
- has a tuning parameter of \(\alpha\)
- has a tuning parameter of dimension
- has no tuning parameters
- Linear models are similar to SVM with linearly separable data in that they optimize the model instead of tuning it.98
- TRUE
- FALSE
- If the data have a complex boundary, the value of gamma in RBF kernel should be:99
- Big
- Small
- If the data have a simple boundary, the value of gamma in RBF kernel should be:100
- Big
- Small
- If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on the cross validation set, what should I look out for?101
- Underfitting
- Nothing, the model is perfect
- Overfitting
- For a large value of C, the model is expected to102
- overfit the training data more as compared to a small C
- overfit the training data less as compared to a small C
- not related to overfitting the data
- Suppose you have trained an SVM with linear decision boundary. You correctly infer that your training SVM model is underfitting. Which of the following should you consider?103
- increase number of observations
- decrease number of observations
- calculate more variables
- reduce the number of features
- Suppose you have trained an SVM with linear decision boundary. You correctly infer that your training SVM model is underfitting. Suppose you gave the correct answer in the previous question. What do you think is actually happening (when you take that action)?104
- i. We are lowering the bias
- ii. We are lowering the variance
- iii. We are increasing the bias
- iv. We are increasing the variance
- i. and ii.
- i. and iii.
- i. and iv.
- ii. and iv.
- Suppose you are using SVM with polynomial kernel of degree 2. Your model perfectly predicts! That is, training and testing accuracy are both 100%. You increase the complexity (degree of polynomial). What will happen?105
- Increasing the complexity will overfit the data (increase variance)
- Increasing the complexity will underfit the data (increase bias)
- Nothing will happen since your model was already 100% accurate
- None of these
- Building on the previous question, after increasing the complexity, you found that training accuracy was still 100%. According to you, what is the reason behind that?106
- i. Since data are fixed and we are fitting more polynomial terms, the algorithm starts memorizing everything in the data
- ii. Since data are fixed, SVM doesn’t need to search in high dimensional space
- i. and ii.
- none of these
- The cost parameter in the SVM means:107
- The number of cross-validations to be made
- The kernel to be used
- The trade-off between misclassification and simplicity of the model
- None of the above
- Suppose you have trained an SVM classifier with a Gaussian kernel, and it learned the following decision boundary on the training set. You suspect that the SVM is underfitting your dataset. What should you try?108
- decreasing C and/or decrease gamma
- decreasing C and/or increase gamma
- increasing C and/or decrease gamma
- increasing C and/or increase gamma
- Suppose you have trained an SVM classifier with a Gaussian kernel, and it learned the following decision boundary on the training set. When you measure the SVM’s performance on a cross validation set, it does poorly. What should you try?109
- decreasing C and/or decrease gamma
- decreasing C and/or increase gamma
- increasing C and/or decrease gamma
- increasing C and/or increase gamma
- Cross validation will guarantee that the model does not overfit.110
- TRUE
- FALSE
- The biggest problem with missing data is the resulting small sample size.111
- TRUE
- FALSE
- Which statement is not true about cluster analysis?112
- Objects in each cluster tend to be similar to each other and dissimilar to objects in the other clusters.
- Cluster analysis is a type of unsupervised learning.
- Groups or clusters are suggested by the data, not defined a priori.
- Cluster analysis is a technique for analyzing data when the response variable is categorical and the predictor variables are continuous in nature.
- A _____ or tree graph is a graphical device for displaying clustering results. Vertical lines represent clusters that are joined together. The position of the line on the scale indicates the distances at which clusters were joined. 113
- dendrogram
- scatterplot
- scree plot
- segment plot
- _____ is a clustering procedure characterized by the development of a tree-like structure.114
- Partitioning clustering
- Hierarchical clustering
- Divisive clustering
- Agglomerative clustering
- _____ is a clustering procedure where all objects start out in one giant cluster. Clusters are formed by dividing this cluster into smaller and smaller clusters.115
- Non-hierarchical clustering
- Hierarchical clustering
- Divisive clustering
- Agglomerative clustering
- The _____ method uses information on all pairs of distances, not merely the minimum or maximum distances.116
- single linkage
- medium linkage
- complete linkage
- average linkage
- Hierarchical clustering is deterministic, but k-means clustering is not.117
- TRUE
- FALSE
- k-means is a clustering procedure referred to as _____.118
- Partitioning clustering
- Hierarchical clustering
- Divisive clustering
- Agglomerative clustering
- One method of assessing reliability and validity of clustering is to use different methods of clustering and compare the results.119
- TRUE
- FALSE
- The choice of k, the number of clusters to partition a set of data into,…120
- is a personal choice that shouldn’t be discussed in public
- depends on why you are clustering the data
- should always be as large as your computer system can handle
- has maximum 10
- Which of the following is required by k-means clustering?121
- defined distance metric
- number of clusters
- initial guess as to cluster centroids
- all of the above
- some of the above
- For which of the following tasks might clustering be a suitable approach?122
- Given sales data from many products in a supermarket, estimate future sales for each of these products.
- Given a database of information about your users, automatically group them into different market segments.
- From the user’s usage patterns on a website, identify different user groups.
- Given historical weather records, predict if tomorrow’s weather will be sunny or rainy.
- k-means is an iterative algorithm, and two of the following steps are repeatedly carried out. Which two?123
- Assign each point to its nearest cluster
- Test on the cross-validation set
- Update the cluster centroids based on the current assignment
- Using the elbow method to choose K
- 1 & 2
- 1 & 3
- 1 & 4
- 2 & 3
- 2 & 4
- 3 & 4
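The two repeated steps (assign each point to its nearest centroid, then update the centroids) are exactly what base R's `kmeans()` iterates, and the number of clusters must be supplied up front. A small sketch with made-up data:

``` r
set.seed(47)  # k-means starts from random centers, so fix the seed

# 20 made-up points in two well-separated groups
dat <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),
             matrix(rnorm(20, mean = 5), ncol = 2))

fit <- kmeans(dat, centers = 2)  # k must be chosen by the analyst

fit$cluster  # cluster assignment for each of the 20 points
fit$centers  # final centroids, one row per cluster
```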
Footnotes
- so that the data are a good representation of the population
- to make cause and effect conclusions
- about 0.1 KB. It turns out that 3.5 billion tweets × 0.1 KB = 350 GB (0.35 TB). My laptop is pretty good, and it has 36 GB of memory (RAM) and 4 TB of storage. It would not be able to work with 3.5 billion tweets.
- the proportion of variability in vote margin as explained by tweet share.
- wherever you are, make sure you are communicating with me when you have questions!
- wherever you are, make sure you are communicating with me when you have questions!
- pushing the file(s)
- poor assignment operator
- invalid object name
- unmatched quotes
- no mistake
- improper syntax for a function argument
- I mean, the right answer has to be Yes, right!??!
- no right answer here!
- In the local folder which also has the R project. It could be on the Desktop or the Home directory, but it must be in the same place as the R project. Do not upload files to the remote GitHub directory or you will find yourself with two different copies of the files.
- Yes! All the responses are reasons to make a figure.
- Because that graphic displays the message you want as optimally as possible.
- color must be specified outside the `aes()` function
- dot color is specified as “navy”, line color is specified as `wday`.
- set the information outside the `aes()` function
- answers may vary. I’d say c. putting the work in context. Others might say b. facilitating comparison or d. simplifying the story. However, I don’t think a. making the data stand out is a correct answer.
- making the data stand out
- One showed the relevant comparison better.
- a. It isn’t at the origin, in combination with d. There wasn’t a label explaining why the axes were where they were. The story associated with the average value axes is not clear to the reader.
- babynames in wrong place
- Table c is best because the columns allow us to work with each of the variables separately.
- (c) does something different because it takes the `mean()` (average) instead of the `sum()`. The other commands compute the total number of births broken down by `year` and `sex`.
- `filter()`
- `(year, name)`
- `sum(num)`
- running the different code chunks gives the relevant output
- `-country`
- `year`
- `gdpval` (if possible, it’s a good idea to name variables something different from the name of the data frame)
- use `pivot_longer()` on raw data. The reference for the study is: Gregory Belenky, Nancy J. Wesensten, David R. Thorne, Maria L. Thomas, Helen C. Sing, Daniel P. Redmond, Michael B. Russo, and Thomas J. Balkin (2003). Patterns of performance degradation and restoration during sleep restriction and subsequent recovery: a sleep dose-response study. Journal of Sleep Research 12, 1–12.
- Mick
- none of them (the default is to retain all the variables)
- `NA` (it would be `NULL` in SQL)
- `map_chr(c(1,4,7), addTen)`, because the output is in quotes: the values are strings, not numbers.
- all of the above. The `map()` function allows vectors, lists, and data frames as input.
- `map(c(1, 4, 7), ~addTen)`. The `~` acts on functions that do not have their own name or that are defined by `function(...)`. By adding the argument `(.x)` we’ve expanded the `addTen()` function, and so it needs a `~`. The `addTen()` function all alone does not use a `~`.
- 6 random normals (1 with mean 1, sd 3; 2 with mean 3, sd 1; 3 with mean 47, sd 10)
- question, yes, no
- “1”, “cat”, “5”, NA, “cat” (Note that the numbers were converted to character strings!)
- makes your results reproducible
- 0.05 If the null hypothesis is true and the technical conditions hold, then we should reject the null hypothesis \(\alpha \cdot 100\)% of the time.
- No one knows. It totally depends on how and how much the technical conditions are violated and how resistant the test is to the technical conditions.
- No one knows. It totally depends on the degree to which the null hypothesis is false.
- 0.95 If the technical conditions hold, 95% of all confidence intervals should contain the true parameter.
- No one knows. If the technical conditions do not hold, the CI may or may not contain the true value of the parameter at the given confidence level (i.e., 95%).
- the Central Limit Theorem doesn’t apply for medians.
- we always need d. random sampling / random allocation for appropriate conclusions. The theory is derived from b. normal data. If c. \(n \geq 30\), then the theory holds really well, regardless of whether the data are normal.
- random sampling / random allocation for appropriate conclusions
- FALSE
- So we can find rejection region
- Control type I error
- We give him a raise when he doesn’t deserve it.
- We don’t give him a raise when he deserves it.
- We give him a raise when he deserves it.
- type I error too high
- A type II error
- A type I error
- Increases your power
- Increases your power
- Increases your power
- can be done on statistics with unknown sampling distributions
- can be done on statistics with unknown sampling distributions
- `c(1, 2, 4, 3, 4, 10)`, because there is no 3 in the original dataset.
- 50
- 1000
- sample statistic
- (0.12, 0.179)
- yes (because the interval for the true difference in population trimmed means does not overlap zero.)
- The proportion that do not contain zero.
- FALSE
- TRUE
- TRUE
- FALSE; it uses Minkowski(p) distance, with a user-specified choice of `p`. When `p = 2`, Minkowski is the same as Euclidean.
- `k` neighbors
- `V` partitions
- Require the assumptions of statistical models
- FALSE. If you don’t prune, you will overfit.
- Pruning is only beneficial when purity improvement is statistically significant (we don’t do hypothesis testing on trees)
- the explanatory (predictor) variables
- high variance
- the observations
- are in the training data and provide independent predictions
- some of the above (both of b. and c. are great!) The oob data frame is exactly the same size as the training data, and it may or may not be bigger than the test data.
- m = # predictor variables
- 20 predictions from all trees
- using cross validation (but is also often used as p/3 or sqrt(p))
- allows for a good model that does not overfit the data.
- FALSE
- TRUE
- FALSE (what is “best” ????)
- yes
- has no tuning parameters
- TRUE
- Big
- Small
- Overfitting
- overfit the training data more as compared to a small C (because the act of misclassifying is heavily penalized)
- calculate more variables (use feature engineering to see if you can get more information out of the variables)
- i. and iv. The model is too simple (i.e., biased), so we need more information to make it more complex. If we make the model more complex it will have lower bias but higher variance.
- Increasing the complexity will overfit the data (increase variance) Even though you already perfectly fit the data, the model could potentially draw boundaries that were even more singular, thus increasing the variance and producing a worse model.
- Effectively, the polynomial bound becomes more wiggly as the degree of the polynomial increases.
- The trade-off between misclassification and simplicity of the model (In the SVM derivation, we think of it as the trade-off between the misclassifications and the width of the street. But that width determines how complicated the model is.)
- increasing C (to discourage misclassifications) and/or increase gamma (to encourage more complicated models)
- decreasing C (to allow more misclassifications) and/or decrease gamma (to make a simpler model)
- FALSE. Nothing in statistics (or life) is guaranteed. Tuning parameters, however, do help the model avoid overfitting as much as possible.
- FALSE. The biggest problem with missing data is that missingness is almost always non-random, so the missing values are systematically different from the non-missing data, which makes conclusions difficult.
- Cluster analysis is a technique for analyzing data when the response variable is categorical and the predictor variables are continuous in nature.
- dendrogram
- Hierarchical clustering
- Divisive clustering
- average linkage
- TRUE (k-means starts with a random set of centers)
- Partitioning clustering
- TRUE
- depends on why you are clustering the data
- all of the above
- From the user’s usage patterns on a website, identify different user groups. Or maybe b.; neither task has a pre-defined response variable.
- 1 & 3