These discussions are interesting to me because I see this the complete opposite way from you. Slot WR is a very valuable position in the NFL and we didn't have one at all. Meanwhile, we did have a center we had drafted in the second round previously, and a lot of people are upset we didn't take a second one instead. If you want to criticize with the benefit of hindsight, then it's more reasonable IMO to say Eskridge was the result of reaching for need rather than taking a higher-rated prospect at a position we already had.
In reality Eskridge was probably a fine pick who just hasn't panned out; much like the countless other draft picks who fail to pan out every year. The much more interesting question is what the Seahawks do going forwards with the information we have now. I doubt JSN would be there at 20, but if so would they try again or do people in the building still think that Eskridge could develop further?
That's an interesting point, and one that hadn't occurred to me, even though there are other cases where I think the process was good, even if the specific outcome wasn't.
To me, the clearest example of a Seahawks pick that didn't work out, but that I think was a good pick because the process behind it was good, was when the Seahawks picked Penny with Nick Chubb still available. This is one the anti-Carroll, anti-Schneider crowd loves to point out as malpractice, but I consider it a completely correct choice.
The Seahawks stated explicitly that the reason they chose Penny over Chubb was that Penny had a much cleaner injury history in college. Chubb's entire left knee had exploded in college, with dislocation, cartilage damage, and tears of the PCL, MCL, and LCL. Penny didn't have any major injuries in college.
The best tool humanity has for dealing with uncertainty is probability theory. The best practices in, say, choosing players to draft can only improve the probability of getting a productive player and reduce the probability of getting an injury-prone player. One specific outcome (e.g., Penny having major injury issues through his entire NFL career, while Chubb has had fewer) does not invalidate the process. That's not how you measure the quality of probabilistic projections (well, not how you should, anyway).
Measuring these things requires making actual probabilistic predictions, like weather reports, which give the estimated probability of, for example, rain tomorrow. I'd be willing to bet NFL teams' analytics departments make such predictions (e.g., their best guesses at the probability of a given player suffering a major injury in his first season, first two seasons, first three seasons, and first four seasons) and keep records of them. Then, once enough of those predictions have accumulated, you break them down into ranges of probabilities. The kind of question to ask when evaluating probabilistic predictions is something along the lines of, "When we said a major injury was between 30% and 40% likely to happen in a given time period, how often did it actually happen?" If the outcome (a major injury) occurred much more or much less than 30-40% of the times you assigned it a 30-40% probability, that suggests there may be a problem in the part of the process that estimates probabilities.
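That bucketing procedure is a calibration check, and it can be sketched in a few lines of Python. The prediction data below is entirely made up for illustration; the function names are mine, not anything a team actually uses.

```python
# A minimal calibration-check sketch: bucket probabilistic predictions
# (e.g., "30-40% chance of a major injury") and compare each bucket's
# average predicted probability to the observed frequency of the outcome.

def calibration_table(predictions, bin_width=0.1):
    """predictions: list of (predicted_probability, outcome_occurred) pairs.
    Returns {bin_start: (mean_predicted, observed_frequency, count)}."""
    bins = {}
    for prob, occurred in predictions:
        # Clamp prob = 1.0 into the top bin instead of giving it its own.
        start = min(int(prob / bin_width), int(1 / bin_width) - 1) * bin_width
        bins.setdefault(round(start, 2), []).append((prob, occurred))
    table = {}
    for start, items in sorted(bins.items()):
        mean_pred = sum(p for p, _ in items) / len(items)
        observed = sum(1 for _, o in items if o) / len(items)
        table[start] = (mean_pred, observed, len(items))
    return table

# Hypothetical injury predictions: (estimated probability, injury happened?)
preds = [(0.35, True), (0.32, False), (0.38, False), (0.31, False),
         (0.36, True), (0.33, False), (0.34, False), (0.37, False),
         (0.85, True), (0.92, True), (0.88, False), (0.12, False)]

for start, (mean_pred, observed, n) in calibration_table(preds).items():
    print(f"{start:.0%}-{start + 0.1:.0%}: predicted ~{mean_pred:.0%}, "
          f"observed {observed:.0%} over {n} predictions")
```

On this toy data, the 30-40% bucket saw the injury 2 times out of 8 (25%), which is roughly in line with what was predicted; a well-calibrated forecaster's observed frequencies should land near the predicted range in every bucket, given enough predictions.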
This is why it drives me nuts when mediots say things about Nate Silver's election forecasts like "he got all 50 states right in the 2008 presidential election" or "he got 49 of 50 states right in 2012." Even if Silver had said, for example, that McCain had a 2% chance of winning Vermont, and then McCain lost Vermont (or, say, that McCain had a 99% chance of winning Kentucky, and then McCain won Kentucky), that doesn't make it a "correct" projection. To Silver's credit, he knows the right way to evaluate probabilistic predictions, and I'm sure FiveThirtyEight evaluates the quality of its projections in the correct relative-frequency-better-be-close-to-our-projected-probability way.