What question (or problem) separates the wheat from the chaff?
In software engineering, I think "unconditional faith in a single technique or technology" is a good sign of overconfidence in one's knowledge or ability. This certainly applied to me when I was younger. Engineers who go all-in on a specific technique tend to have really inconsistent results: some projects go amazingly because the technique happened to be the best option; others go poorly because it wasn't.
A good sign of maturity is when they can notice "hey, that didn't go great" and use it to inform their future strategy: "technique X applies when these conditions are true. If one of them is false, I'm not sure what to do, but I should conduct more research (or talk to more people) before starting that project."
I've seen that so many times, but also its opposite: a willingness to drop the whole stack when a relatively minor problem pops up. Navigating between both of these takes experience and keen observation.
This can be generalized to any sort of black-and-white thinking, often characterized by thinking or asking "What's the best...?" People usually arrive at such a conclusion through a shallow or second-hand evaluation. When I do my research, the best I come up with are "default" tools: what I'd use before knowing any specifics that might override them.
agreed. now, I am personally biased for or against certain technologies, but I would still say that (even though I might not like it) something like PHP probably has its perfect use cases...
when I've interviewed people, I made sure to ask questions like "what do you like/dislike about language/technology/technique X" and a lot of people really seem to struggle with that kind of question
> when I've interviewed people, I made sure to ask questions like "what do you like/dislike about language/technology/technique X" and a lot of people really seem to struggle with that kind of question
Wait really? That's shocking to me.
> unconditional faith in a single technique or technology
Or language (but I suspect that you might count language as included in "technique or technology").
As a marketer, asking someone to develop a Go To Market strategy definitely separates the wheat from the chaff.
You see who really thinks of marketing in terms of competitive positioning, differentiation, and how to create different messages and content for different parts of the funnel (and how to measure it all), and who just throws a random bunch of tactics at it.
I agree -- I think that, in general, putting someone a bit on their own, whether they're the first employee of the company or department, or the only person assigned to a substantial project, is going to make their approach, competencies, and weaknesses clear.
Unfortunately in my experience this often takes more than a couple days to shake out, so it doesn't help in interview / recruiting.
I'll let you know when I no longer feel like the chaff!
It'd probably be similar to beefield's. Present a problem with a clear success condition and see how resourceful they are in solving it. A failure would be solving the wrong problem, making excuses (or claiming "the problem is not solvable"), confidently talking in abstractions that don't concretely pertain to the problem, or failing to ask a question or say "I don't know".
I don't know that there's a technical question or problem I could ask to make a call on whether or not they know their stuff.
Tell me about an opinion you hold on some piece of technology (framework, language, design pattern, etc.), one you have a fair amount of conviction about and hold pretty strongly. Now tell me how you arrived at that opinion and conviction.
Depending on what opinion they pull out, I might tweak that and suggest: "What's the argument against that opinion?"
From what I've seen, a lot of people can rattle off reasons they've heard for why a particular thing is done, but have not deeply considered the opposite side of that opinion and what its benefits may be. YMMV.
Yeah, that's a good followup. It's akin to having a strong counterargument in an academic paper. What I'm going for with the question is an understanding of how the candidate acquires knowledge. Are they generating it themselves via some process like the scientific method? Are they mostly doing that but maybe taking shortcuts here and there by credulously taking on the opinions of trusted sources? Are they throwing all of that overboard and just embracing opinions/received wisdom that are popular or correlate with higher compensation?
The problem I've run into is that it's a pretty easy question to game, if you know the purpose behind asking it; nobody is going to come out and say, e.g., "I think ORMs are dumb because that's what other folks seem to say, and I see those people having high-status/well-compensated jobs, and I'd like some of that too, plz." So maybe it's not a great "weeder" question. /shrug
This strikes me as unreliable because I have a devil's-advocate sort of mindset REGARDLESS of my actual level of expertise. The issue with this sort of mindset is that it makes you better at tearing down other people's opinions than at coming up with a brilliant opinion of your own. That's problematic when you're trying to do something innovative in a crowded, saturated field.
I have the same attitude, and it's good to try to counteract it a bit, but I have found that this attitude can be a great asset when paired with other people who are a bit more on the optimistic side; as long as both parties are willing to listen to each other.
One of my favorite interview questions is "What is your favorite programming language, and what would you change about it if you could?" Any answer along the lines of "I wouldn't really change anything" shows a lack of either experience or willingness to think about your tools.
Or it could be a sign of pragmatism, or having the freedom to choose your tools.
Why wish your tool changed instead of focusing on using it, or something better suited, to produce things?
Because every tool has limitations, and even if you think your tool can't/shouldn't necessarily be changed you still need to be aware of its gotchas to use it effectively.
In physics, especially anything remotely related to quantum physics, whether somebody focuses on measurable quantities or not. If somebody has a new measurement technique, something that gives better precision than before, great. If somebody has a new theoretical model that applies to cases that previous models didn't, great. If somebody has a new theoretical model that gives different results than current models, that is great because new experiments can determine between them.
If somebody has a new model that gives the exact same predictions as a current model, lauds it as being obviously the truth, and dismisses all other models as falsehood, then it can be dismissed. This applies to all various interpretations of quantum physics, since they all yield the same measurable predictions.
Don't know about questions, but there are couple of answers I have learned to be serious red flags:
1. "Yes, I understand." (with no further elaboration or clarifying questions)
2. "It's okay/good/going well." (again, with no further elaboration when asked how something is going or is set up)
First one typically translates to something between "I have a completely unfounded illusion of understanding" and "I have no clue but don't dare to say it"
Second one translates typically to "I have no clue whatsoever"
>"I have a completely unfounded illusion of understanding" and "I have no clue but don't dare to say it"
Or "I'm tired of this droning conference call/meeting where I can barely understand some people's accents and I'd rather just clarify some points in writing later where I don't have to rely on my memory."
>"I have no clue whatsoever"
Or it really is going fine and I don't want to spend forever going into technical details the person asking isn't even going to understand and get interrogated on anything that doesn't sound like it's going perfectly, when everything is under control on my end.
Yeah, I am often in meetings like this because I have trouble paying attention to or understanding voice communication (I also always watch films with subtitles), but I think tech-wise or problem-solving-wise I am decent.
I say both all the time, am trusted to handle projects on my own from start to finish, and those projects tend to go very smoothly.
I do also ask questions without hesitation when needed (to the point where it starts to irritate non-developers), but saying "Yes, I understand" or "it's going well" when I do understand and when it is going well is not a red flag.
It's a shame that saying "I don't know" isn't more culturally acceptable in many orgs. Technology is incredibly vast and entirely designed by humans, and more people in the enterprise than you could ever imagine actually have no idea what they are doing.
Whenever I have casual conversations with individuals who have the same skillset as me, outside of the environment where I wear my professional "all-knowing" persona, admitting that you don't understand something sparks incredible conversations.
What's even more incredible is that every time I've stood up and said "I actually have no idea how X works. It is magic to me," somebody else in the room who had me completely convinced they had a strong grasp of those concepts will admit they don't know either. Every single time.
These answers are also my red flags. But I place more blame on culture than the individual sometimes.
#2 might alternately be symptomatic of not wanting to chat with you on the matter, no?
Of course. I thought the context was quite obviously me being in a position where I am expected, for whatever reason, to make a judgement of my counterpart's capabilities/progress/quality of work/whatever. At that point I do not much care whether the counterpart is a Dunning-Kruger nutcase, just not interested in chatting with me, or, as per many of your sibling comments, just thinks they know what they are doing so well that they do not need to explain details to me. The outcome is the same. Bayes makes me expect that these people are useless. (Obviously Bayes lets me make exceptions once a certain individual has proved themselves.)
I disagree. I do this all the time when I know (1) someone is presenting a fact to the limit of their knowledge and are not interested in going deeper, or (2) someone is not truly interested in how things are going, and my details will fall on deaf ears, in which case I prefer to conserve energy.
The scenario is even more complex when working for govt. Your role is often that of a sponge: Absorbing all the information presented and volunteering very little, if any, of your own.
As I get older, I try to keep my sentences short and to the point. Often they are dense, layered with meaning. In my experience, folks often wax poetic and use all sorts of inaccurate analogies when they understand very little about the subject.
> First one typically translates to something between "I have a completely unfounded illusion of understanding" and "I have no clue but don't dare to say it"
You didn't mention the industry, so maybe you're in an environment where you spoon-feed everyone, but for me it translates to "yes, I understand the goal and the vague outline; let me go away and dig into the details, then I can ask good questions".
I've noticed that some people, when they don't know the answer to a question, just pretend the question was about something different and confidently answer that other question instead. I really strongly prefer people who know how to say "I don't know".
You can't know everything. What's important is that you know how to learn.
I wish I was better at reflective listening.
I used to like asking other jazz musicians I met for the first time, and hadn't heard, "Are you good?" (asked with a smile sometimes). People who actually are good never say "yes" or talk like they think they are. It's interesting hearing what people say, anyway.
Otherwise, asking people who their favourite players are is a quick test, when suspicious. I've come across a few "jazz musicians" who can't even name a single jazz musician. Also, in chess, asking who someone's favourite players are is a quick test of whether they're any good.
In my software dev experience, it is clear from how they think about blog posts or other articles online.
The people I consider to be great at this will read a variety of content on a topic, think about it, synthesize it with their own experience, go try new things, and decide for themselves what is correct for their own project.
The less skilled will say, "See, this blog post said to do it this way, so that is what we shall do."
In grappling (Brazilian jiu-jitsu), one sparring round with a person is enough to get a sense of what they know and a general idea of their skill/belt level. The exact belt is determined by that person's coach, who measures their skill against their personal potential.
An analogy to software would be to pair program with a person. You both get a feel for each other's strengths and weaknesses while working on a particular problem. I feel that coding interviews try to replicate this, but fall short given time constraints.
Has anyone in the world actually delivered effective, large scale software that was primarily pair programmed?
Genuine question. It would be fascinating if there were documented examples.
Note that this is distinct from having a company culture where pair programming is common. I am skeptical that just because you occasionally look over each other’s shoulder and exchange ideas, that the software ends up being fundamentally different from the ground up. It seems unlikely that two people would both decide to rewrite an entire subcomponent of the project from scratch, whereas individuals do that often. And the results turn out better (but only when the person is an effective programmer, otherwise it’s just a mess).
I don’t think you’d learn much from pairing with me. You’d see me sitting, staring at the screen for long periods of time, saying nothing. Or experimenting with pasting various code tweaks into a repl. The final result that makes it into a text editor is the outcome of both processes, and these have long time horizons. Sometimes far longer than someone else could reasonably tolerate.
Guy Steele expresses similar admiration. Currently a research scientist for Sun Microsystems, he remembers Stallman primarily as a "brilliant programmer with the ability to generate large quantities of relatively bug-free code." Although their personalities didn't exactly mesh, Steele and Stallman collaborated long enough for Steele to get a glimpse of Stallman's intense coding style. He recalls a notable episode in the late 1970s when the two programmers banded together to write the editor's "pretty print" feature. Originally conceived by Steele, pretty print was another keystroke-triggered feature that reformatted Emacs' source code so that it was both more readable and took up less space, further bolstering the program's WYSIWYG qualities. The feature was strategic enough to attract Stallman's active interest, and it wasn't long before he and Stallman were planning an improved version.
"We sat down one morning," recalls Steele. "I was at the keyboard, and he was at my elbow. He was perfectly willing to let me type, but he was also telling me what to type."
The programming session lasted 10 hours. Throughout that entire time, Steele says, neither he nor Stallman took a break or made any small talk. By the end of the session, they had managed to hack the pretty print source code to just under 100 lines. "My fingers were on the keyboard the whole time," Steele recalls, "but it felt like both of our ideas were flowing onto the screen. He told me what to type, and I typed it."
The length of the session revealed itself when Steele finally left the AI Lab. Standing outside the building at 545 Tech Square, he was surprised to find himself surrounded by nighttime darkness. As a programmer, Steele was used to marathon coding sessions. Still, something about this session was different. Working with Stallman had forced Steele to block out all external stimuli and focus his entire mental energies on the task at hand. Looking back, Steele says he found the Stallman mind-meld both exhilarating and scary at the same time. "My first thought afterward was: it was a great experience, very intense, and that I never wanted to do it again in my life."
Jeff Dean and Sanjay Ghemawat are one such example. They’ve been pair programming since before Google.
I interviewed at pivotal in UK and they seemed to be all about pair programming.
While I largely agree, one similarity I see to software engineering is that a beginner (i.e., a day-one white belt or a fresh grad) can work with a purple belt or a senior-level developer and be amazed at the gap in knowledge, when in reality that purple belt or senior might look at their own superior and wonder the same thing. There are levels to the game, and ultimately you don't know what you don't know.
One of the reasons I like BJJ is because it's hard. There is no substitute for time on the mats, in the same way that you cannot become a good software engineer by simply graduating from a top-tier university or by reading every book on the subject.
I don't want to take anything away from your comment, because it's absolutely right. It's just a nice lesson to consider that you can perceive something, and it not being the case.
The point about feedback is the root of my problem with DK as an idea.
In most professions, you have a ton of feedback indicating where you are, often formalized. In martial arts, you wear a belt showing roughly what level you're at, which you earned through tests done by people far senior to you. In the military, you wear your rank, maybe skill badges, and do constant training. In most other jobs, you have a title and get feedback from supervisors and such.
The hypothesis of DK seems to require that the person is evaluating their performance entirely on their own, possibly because the authors were evaluating comedic skill, which is very subjective and something everyone knows at least a little about. Moreover, it's one where you get bad feedback: good comedy among friends is not the same as comedy that pleases a crowd.
But in the professional world, where you practice a craft towards a result and are exposed to your peers, the information on how you stack up is pretty objective and comes to you fairly consistently.
And I think the cases people attribute to DK are better explained by other mechanisms. In particular, there's a lot of confusion between self-knowledge and knowledge of the complexity of a task.
I have done phone screens and asked people stuff like, "how many numbers can be represented with 8 bits." But that's to screen out people who were applying to a job without having a clue what the job entailed. Not understanding a job description is different from not understanding your level of skill.
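The arithmetic behind that screener question is just exponentiation; a minimal sketch (the function name is my own):

```python
# n bits have 2**n distinct bit patterns, so they can represent
# 2**n distinct values (e.g. 0..255 for 8 unsigned bits).
def representable_values(bits: int) -> int:
    return 2 ** bits

print(representable_values(8))  # 256
```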
And I see plenty of self knowledge in interviews; if I ask about algorithms and data structures, a common answer is "I haven't looked at that since school." You learn: 1. the person doesn't think that skill is relevant to the job and 2. the person is willing to admit they aren't good at it.
I may disagree with the person that the skill is relevant, after all, I asked the question for a reason, but it's not a lack of self-knowledge on the part of the candidate.
Yeah, in programming you get a feeling for whether someone is really uncomfortable pair programming.
A basic maths competence test would consist of trying to find out what the person thinks mathematics actually is. Inexperienced people would mostly think of computations, such as evaluating integrals or computing eigenvalues. People who understand maths know that it chiefly deals with definitions, theorems, proofs, conjectures and problem solving (that last one is sometimes forgotten even by very theoretical mathematicians, but it's key to some areas such as graph theory, combinatorics, etc.).
In data engineering, I think responses to the "entity resolution" problem are a good Dunning-Kruger-style litmus test.
If you don't know, entity resolution is the process of matching unique rows in two or more databases. Are these the same movies? Are these the same person?
Novice DE: Oh easy, just merge on the name.
Intermediate DE: OH GOD NO. <michael_scott_no.jpg>
Expert DE: That's complicated, but I have a plan.
Just curious, is there a standard way to start attacking that problem?
You always need some sort of data normalization scheme, and one that makes sense for the task you're running.
(This including things such as Unicode normalization and looking at other fields to determine if it's the same thing.)
And you get to handle duplicates too.
That is just the start, problem gets even more interesting in a real sharded scenario because eventual consistency is hard.
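A minimal sketch of the normalization step described above, and of why "just merge on the name" fails (the helper name is my own; real entity resolution adds blocking, fuzzy matching, and survivorship rules on top):

```python
import unicodedata

def normalize_name(name: str) -> str:
    # Unicode-normalize (NFKD), strip combining accents,
    # fold case, and collapse runs of whitespace.
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = "".join(c for c in decomposed if not unicodedata.combining(c))
    return " ".join(ascii_only.lower().split())

# A naive merge on the raw name misses an obvious match:
print("José  Álvarez" == "jose alvarez")                          # False
# After normalization, both rows key to the same entity:
print(normalize_name("José  Álvarez") == normalize_name("jose alvarez"))  # True
```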
When people plain-language declare things as "safe" or "unsafe" it's clear they don't have a background in technical risk assessment.
When people call 5G "safe" or "unsafe", it's clear they don't know about radiation effects on human health. The correct answer is "we don't have data".
Edit: The fact that people would rather make fun of me and imagine me as a "non-ionizing radiation tin-foil hatter" (I am not; non-ionizing radiation has no negative long-term health effects) is, I hope, a demonstration of the power of one's own biases and of forcing them onto a stranger to fit one's own convenient worldview.
Your point as a whole stands, but I think even saying “we don’t have data” misrepresents the situation. We have strong theoretical reasons, from our understandings of physics and biology, to believe that there will be no harmful effects. An expert would reasonably say to someone, either a layman or another expert, “5G is safe.”
I disagree: saying "we don't have data"  doesn't say anything about the existence or lack of existence of underlying models. Models are refined all the time to account for exceptions and transitions shown in empirical measurements (Mercury's orbit, deflection of light & general relativity, for example).
The tetrahertz range is especially interesting precisely because it is an unknown between known non-ionizing and known ionizing effects. Extrapolating down from ionizing radiation biological effect data and panicking makes as little sense as extrapolating up from non-ionizing radiation biological effect data and making fun of the panic-ers.
I think it's a fantastic opportunity to prove or modify our existing models, but a mistake to assume the models hold true, given the historical intensity of debate around the (discredited) linear no-threshold model and the subsequently proposed alternative models.
Edit: I would personally say 5G is more likely to be safe (no acute radiation-caused long term health effects) than unsafe. But I'm not one who likes to gamble, especially with the lives of the public.
 DOI: 10.1080/10643389.2011.574206
Precautionary principle notwithstanding, this is framing the question wrong. The proper question to ask is: is it safer than microwave transmission at the low doses used in phones and Wi-Fi? Those are already absolutely known to be unsafe, just extremely mildly. (Likely not even carcinogenic, given historical population incidence data. Current evidence suggests similar impact for similar-power high-frequency transmitters.)
Policy questions are not somewhere we can afford ignorance and black and white thinking. Precautionary principle is only good if you're not stopping a better and safer solution with it for sake of conservatism.
See, 5G is not just washing everyone with radiation; it comes with widespread beamforming, which is likely to send lower energy through anything obstructing it. The skin effect prevents the waves from penetrating at the low energies used in communications.
Those are extremely strong arguments, and the onus is really on anyone who says it's unsafe to actually show that it is, properly and without doubt.
Otherwise it's plain scaremongering to suggest "we don't know" when the consensus is "it's safe". Exactly the same tactic as used by some climate denialists.
Sample study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4997488/ (Read all the links to see the whole story. And these scientists used a rather high energy level compared to expected. The current analyses suggest the studies - which mostly show effects - inadequate. There's a strong bias towards publishing an evidence of effect when there's none. See: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6765906/ )
I could say the same thing about people who talk about radiation without mentioning what part of the frequency spectrum they are referring to.
Exactly, such people who do know would clearly understand my comments are around the allocations of 5G in the tetrahertz range, where the gap is shrinking, instead of uncharitably mischaracterizing my statements in the worst possible light (5G allocations elsewhere in the spectrum).
If you mean terahertz, 5G is supposed to top out around 85 GHz. Also, the terahertz range is the start of infrared light. If you think that is dangerous, you might not want to sit too close to a campfire.
Well, microwaves are dangerous too, enough of them will cook you.
At higher frequencies, the effects get more and more surface level though.
With enough power infrared lasers will cut through steel but no one is afraid their TV remote is going to cut off their fingers. I'm not sure what point you are trying to make by talking about the effects of something with a million times the power behind it.
Well yeah but have you considered that if you put a mouse in a slowly oscillating catastrophic magnetic field you get a usually statistically insignificant outcome!! Wake up sheeple
Studies have repeatedly shown non-ionizing radiation has no long term health effects.
As materials science closes the tetrahertz gap, the longitudinal studies, by their very definition, take time to complete (longitudinal studies = exposing animals or individuals at lower doses for much longer periods of time).
It's natural for an undisciplined mind to assume they're also non-ionizing. But it is a mark of a disciplined scientific mind to be cautious and instead not extrapolate outside the existing dataset.
Even the people saying it harms people don't say its ionizing.
If we take the frequency of 5G and compare it with X-rays (the lowest energy ionizing photons give or take) we're only off by a factor of 600000
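That factor can be checked with E = hf, since photon energy scales linearly with frequency; taking high-band 5G at ~50 GHz and soft X-rays at ~3×10^16 Hz (both assumed round figures):

```python
PLANCK = 6.62607015e-34  # Planck constant, J*s

def photon_energy(freq_hz: float) -> float:
    # Photon energy: E = h * f
    return PLANCK * freq_hz

f_5g = 5e10    # ~50 GHz, high-band 5G (assumed round figure)
f_xray = 3e16  # ~soft X-ray ionization threshold (assumed round figure)

# Energies differ by the same factor as the frequencies:
print(f_xray / f_5g)  # 600000.0
```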
Do you think that light bulbs cause cancer? They emit higher frequency radiation than 5G.
They can cause skin cancer if they emit UV. That one is rather obvious, so most light emitters are made to not emit UV light.
That's like saying your TV would cause cancer if it emitted X-rays. I don't know what point a made-up scenario contributes. This was a reply to someone saying that "we don't have the data to know if sub-light frequencies of radiation are non-ionizing".
Dunning-Kruger separating mid- and senior-level devs:
JUNIOR DEV: My code is simple and easy to understand.
MID-LEVEL DEV: My code is subtle, clever, innovative, expressive, hyper-optimized, and ingenious.
SENIOR DEV: My code is simple and easy to understand.
Source: John Arundel, https://twitter.com/bitfield/status/1219174978748370945
But take that for what it's worth: https://twitter.com/bitfield/status/1184741088067833856
At one job we actually had to give up on take-home challenges basically because a good, humble junior was indistinguishable from a good, clearly-thinking senior
So funny. This makes sense as the latter would be catering to an audience which includes the former.
We coined the title "señor dev" for people who came in saying they were seniors but whose contribution was net negative over time.
Fintech: is the big idea they are pitching a way for friends to split a dinner bill?
Sigh. DK, as commonly cited, is not simply "competence."
It hypothesizes a meta-cognitive bias whereby a person who is incompetent is cognitively incapable of determining their level of competence.
Everyone has biases, and the reason citing DK may come across the wrong way is it tends to suggest that it's just Other People who have those biases.
From this piece arguing that we routinely misinterpret DK:
> I suspect we find this sort of explanation compelling because it appeals to our implicit just-world theories: we’d like to believe that people who obnoxiously proclaim their excellence at X, Y, and Z must really not be so very good at X, Y, and Z at all, and must be (over)compensating for some actual deficiency.
But, generally, if the question of competence comes up, don't reflexively cite DK. It's been around for ages, it's not novel, and it's usually being miscited.
I am not sure I understand. It sounds like DK claims that assessing one's own competence is difficult (perhaps impossible).
I don't see how that conflicts with the title. It is asking for a test that can assess one's rough ability.
It's not contradictory. My point is it's counterproductive to bring DK into the picture, and I'm pushing back against the way it's used in popular culture. (So, absolutely tilting at windmills, but oh well.)
First, it's adding a false veneer of scientism. All it's asking for is a rough test of competence; citing a wholly irrelevant science paper won't make that test grounded in any kind of methodology.
Second, there's this implied moral condemnation of people who are incompetent. Obviously, we're all incompetent in most things, so that's a strange thing to moralize. But the way DK is used in popular culture, there's an added dimension of, "this arrogant idiot thinks he's competent, but my test really took him down a peg."
"Do you think you can write your own blog posts?"
Everyone thinks they can, but almost nobody actually will. I'm not a particularly good writer, but I can do what most people can't or won't: actually follow through on writing something month after month after month.
A deep neural network paper that doesn't provide a reproducible dataset to verify the results, e.g. the AlphaZero paper. I would rather study Leela Chess than believe AlphaZero. Sure, the former is "inspired" by the latter, but at least I can verify it!
As a physics student, my test for myself is always to derive a given equation from scratch (and surprise myself in the process).
My "long term" goal in terms of scientific and mathematical maturity is to git gud at calculating path integrals in quantum mechanics.
Many years ago I graduated university with a major in mathematics, and I used to think I was pretty good at picking up new maths, including pretty advanced stuff like calculus of variations and Fourier analysis. But for some reason I'm having a real hard time getting into quantum mechanics and Hamiltonians and all that. Am I getting old? Just out of practice? It annoys me...
Have you studied Analytical Mechanics first? Sakurai is a very good book for QM.
If you don't get it physically first, then the Feynman lectures are very good. Physics is more than postulates and proof; at the very least there's less to memorise if you start from the bottom.
One for hardware engineering might be not considering the PCB itself. At higher voltages your double-sided FR-4 will become a capacitor, and at 100 MHz+ speeds you'll have EMI problems from 90-degree traces. It's easy to think that you can go from a perfboard-and-wires proof of concept to a PCB in a day, but once you go higher than ~12 V or 100 MHz it starts to feel like the board is haunted.
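The board-as-capacitor effect can be roughed out with the parallel-plate formula; a back-of-the-envelope sketch (εr ≈ 4.4 and 1.6 mm thickness are assumed typical FR-4 values, and the parallel-plate approximation ignores fringing fields):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plane_capacitance(area_m2: float, thickness_m: float, er: float = 4.4) -> float:
    # Parallel-plate approximation: C = eps0 * er * A / d
    return EPS0 * er * area_m2 / thickness_m

# 1 cm^2 of copper pour over a ground plane on 1.6 mm double-sided FR-4:
c_pf = plane_capacitance(1e-4, 1.6e-3) * 1e12
print(round(c_pf, 1))  # ~2.4 pF of parasitic capacitance
```

A couple of picofarads is negligible at audio frequencies but very much not negligible at 100 MHz+, which is one reason the perfboard prototype and the PCB behave differently.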
Build vs. buy for tech platforms: 100% build and 100% buy are both dangerous answers from a senior tech person.
In software development, what are the most important qualities of an engineer?
Chaff: Understands computer science. Knows how to scale. Mentor of junior engineers.
Wheat: Understands people. Knows when to scale. Mentee of junior engineers.
I thought it was laziness, impatience and hubris.
When someone takes a position on a topic but is unable to answer questions on why that position was taken. Scientists and academics are often guilty of this when they pontificate on a topic outside their domain of expertise.
In programming, when someone talks about lines of code they've written as an accomplishment (more is better). For those in the know, they're a liability. Solving the problem is the accomplishment.
More generally, the right answer to almost any question in software is: It depends. (That is, it depends on the usually large set of explicit and implicit, technical and non-technical requirements.)
"What can you tell me about the halting problem and its consequences".
You're in a server room walking along the racks, when all of a sudden you look down. You see a server. You flip the server off, Leon. The server is off, and it can't turn back on. Not without your help. But you're not helping. Why is that, Leon?
How do you avoid cross threading? Can you grab me the 50 mil allen wrench?
What is the best ORM?
Not a great question for the stated purpose.
Some people will dig into the various tradeoffs involved with an ORM and pick apart the structure of one. And that should result in an "it depends" kind of answer.
Others are more interested in how a team is going to use an ORM: does it help junior devs write "not awful" code? How does it help me structure my application logic?
Others are stats nerds who will tell you which ones do best on benchmarks, are most popular, etc.
And many people will tell you a story about some ORMs they used, and why they liked them or not.
These answers can be useful for getting a sense of how a person thinks, but they won't tell you if the person is competent.
The competent person will say all ORMs are garbage.
Simplistic "No True Scotsman" bullshit. ORMs provide type safety, ease of refactoring, and reduce a lot of boilerplate. Of course, occasionally you need to bypass an ORM to write complex queries, which is fine. ORMs are not perfect, but are a very useful tool nonetheless.
Ironically a competent person would probably not say "All X's are garbage" because, as with most things in software, it's a) complicated and b) never that clear cut.
A query builder with no ORM functionality.
Alternately, a resultset-to-object-property and argument mapper with no query building or templating functionality or higher aspirations.
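That middle ground can be tiny. A minimal sketch of such a resultset-to-object mapper in Python, assuming the stdlib `sqlite3` and `dataclasses` modules (the `User` table and `map_rows` helper are illustrative, not from any library):

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

def map_rows(cursor, cls):
    # Map each row of the resultset to an instance of cls by column name.
    cols = [d[0] for d in cursor.description]
    return [cls(**dict(zip(cols, row))) for row in cursor.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")
cur = conn.execute("SELECT id, name FROM users")
users = map_rows(cur, User)  # [User(id=1, name='ada')]
```

You keep full control of the SQL; the only "ORM" part is the mechanical row-to-object mapping.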
Just store the JSON in a K/V store
Related to the concept of a Dunning-Kruger "test": Shibboleths.
"Where have you seen the Dunning-Kruger effect in action in your work?"
If they start saying something negative about someone without self-reflection, they are exhibiting the Dunning-Kruger effect.
The nature of Dunning-Kruger implies that people who are not qualified to answer this question will try to answer it.
I have grown really frustrated with software hiring. Candidate selection is incredibly biased. I really get the impression that software is rife with Dunning-Kruger types who absolutely cannot write simple code to solve tiny problems, and that they selectively bias candidate selection to pick people for selfish reasons that complement their own position on a team.
Here is how I would screen developers for a general developer position. I would set a 1-hour limit and ensure the candidate knows the test is graded only by a computer. The goal is simple: can they read instructions and write simple code? I don't care how they write the code, only that they can.
Any question could be as complex or challenging in its requirements as necessary, but the answer would always be a small function with few parts. The idea is to test reading comprehension, the ability to follow instructions, and the ability to write a simple function. An example of the output format and data type would be explicitly stated with each question.
An example question:
A customer is spending cash to purchase a drink. Write a function that receives cash as the first argument and the cost of the drink as the second argument and outputs an object indicating the change in coins with preference to the largest denominations first.
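For concreteness, one passing answer to that example question might look like this (Python; US coin denominations in cents are assumed, since the prompt doesn't specify a currency):

```python
def make_change(cash, cost):
    """Return the change owed as {denomination_in_cents: count},
    preferring the largest denominations first."""
    remaining = round((cash - cost) * 100)  # work in cents to dodge float error
    change = {}
    for coin in (25, 10, 5, 1):  # quarter, dime, nickel, penny
        count, remaining = divmod(remaining, coin)
        if count:
            change[coin] = count
    return change

make_change(1.00, 0.63)  # -> {25: 1, 10: 1, 1: 2}
```

Nothing clever: a greedy loop over denominations is exactly the "small function with few parts" the test is meant to elicit.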
1. Specify the grading criteria of the test. 10 points for each question correctly answered within the given time period. There is no penalty for answering a question incorrectly. The cumulative total execution time of all answered questions will be multiplied by 100 if in milliseconds or by 100,000 if in nanoseconds and be deducted from the final score. The idea is to solve as many questions as possible, but in the event that there are a limited number of positions and tied scores slow code is the tie-breaker that disqualifies a candidate.
2. Stress that code style is irrelevant. The test will be graded by a computer compiling the code and executing it against several scenarios from an answer bank. A human will not review the answers submitted.
3. Tell candidates that once the 1-hour test period has elapsed they will have an optional 15 extra minutes to review all previously attempted questions.
4. Have the candidate complete 3 practice questions before the test timer starts, to familiarize the candidate with the expectations of the test platform and what appropriate answers look like.
5. Randomly pull questions from a pool of 200+ questions 1 at a time to ensure a developer is focused on the question presented instead of selectively gaming the question list.
6. Ask the candidate to write a function in a designated space that addresses the problem question/statement.
7. Allow the user to test their answer to review the output before submitting it for the next question.
8. Also allow the user to skip to the next random question with a 2 minute time penalty.
That is for a developer. For an architect I would have them write an essay on a prompt provided by the business. Architects review business needs, distill requirements, and communicate goals to provide a platform that meets the business needs while minimizing complexity as much as possible. If they can do that in writing, they can do it in software. The most important skill is their ability to communicate clearly against competing factors. Have 3 separate non-developers read and grade the essay.