Something that has been puzzling me for a while. When bloggers writing about theoretical computer science (TCS) discuss the status of the P vs. NP question, they always feel compelled to say something along the lines of "...but in practice, thanks to the enormous power of modern SAT solvers (or ILP solvers, CP solvers, etc.), many NP-hard problems can be solved in reasonable time".
On the one hand, I understand why they write this; it is to emphasize that NP-hardness is by no means the end of the world. On the other hand, anybody who has used these tools will know that it is still fairly easy to build (structured) instances that are not too large yet completely cripple them.
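One classic family of such instances is the pigeonhole principle: encoding "n+1 pigeons into n holes" in CNF gives a small formula (O(n^3) clauses) that is unsatisfiable, yet requires exponential-size resolution proofs (Haken, 1985), so resolution-based CDCL solvers slow down dramatically as n grows. A minimal sketch of the standard encoding in DIMACS format (the variable-numbering scheme here is my own convention, not part of any standard):

```python
from itertools import combinations

def pigeonhole_cnf(n):
    """Clauses asserting n+1 pigeons each sit in one of n holes.

    Unsatisfiable by the pigeonhole principle, but hard for
    resolution-based (CDCL) SAT solvers.
    """
    # Variable for "pigeon p sits in hole h" (1-indexed, as DIMACS requires).
    var = lambda p, h: p * n + h + 1
    clauses = []
    # Every pigeon occupies at least one hole.
    for p in range(n + 1):
        clauses.append([var(p, h) for h in range(n)])
    # No hole holds two pigeons.
    for h in range(n):
        for p, q in combinations(range(n + 1), 2):
            clauses.append([-var(p, h), -var(q, h)])
    return clauses

def to_dimacs(clauses, n_vars):
    """Render clauses in the DIMACS CNF format most solvers accept."""
    lines = [f"p cnf {n_vars} {len(clauses)}"]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

cnf = pigeonhole_cnf(12)
print(to_dimacs(cnf, 12 * 13).splitlines()[0])  # prints "p cnf 156 949"
```

Pipe the output of `to_dimacs` into any DIMACS-accepting solver (e.g. MiniSat or Kissat): already at n around 12-15 the instance, despite being only a few hundred clauses, takes noticeably long.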
Is this lack of nuance in the discussion strategic (to really emphasize that NP-hardness is, in many real-world cases, only an asymptotic problem)? Or is it because the TCS community, in contrast to, say, the Operations Research community, knows about these tools but rarely uses them in practice, and thus has only a limited understanding of their shortcomings?
Any thoughts?
From skelk@mathstodon.xyz