P versus NP has a long and complicated backstory; see Stephen Cook's official statement of the problem for the Clay Millennium Problems: https://www.claymath.org/millennium-problems/p-vs-np-problem
We have also recently been reviewing Craig Alan Feinstein's papers on the problem, especially his article https://arxiv.org/abs/1605.08639.
Now we must record that Feinstein's work is somewhat controversial, although we find it quite interesting. For example, here is a critique of one of Feinstein's earlier papers (notice the year, 2007): https://arxiv.org/abs/0706.2035. Let us remark that controversy is very important for science, and we strongly believe in the need for creative controversy. So we follow the debate with much interest.
The SUBSET-SUM problem is the following: given an ordered set of $N$ integers $a_1, \ldots, a_N$ and a target integer $b$, decide whether some subset $S \subseteq \{1, \ldots, N\}$ satisfies $\sum_{i \in S} a_i = b$. The naive algorithm enumerates all $2^N$ subsets, so the obvious worst-case running time is $O(2^N)$ up to polynomial factors.
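For contrast with what follows, here is a minimal brute-force sketch of our own (the function name `subset_sum_bruteforce` is our choice) that simply tries all $2^N$ subsets:

```python
from itertools import combinations

def subset_sum_bruteforce(values, target):
    """Decide SUBSET-SUM by trying all 2^N subsets.
    The empty subset (sum 0) counts here."""
    for k in range(len(values) + 1):
        for combo in combinations(values, k):
            if sum(combo) == target:
                return True
    return False
```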
Feinstein's argument for reducing the complexity to $O(2^{N/2})$ is, in essence, the classical meet-in-the-middle algorithm of Horowitz and Sahni: split the $N$ numbers into two halves, enumerate the $2^{N/2}$ subset sums of each half, sort one of the two lists, and then, for each sum $s$ from the first list, use binary search to look for the complementary value $b - s$ in the second. Up to polynomial factors this runs in $O(2^{N/2})$ time, a quadratic improvement over brute force; a sketch of the algorithm appears below.
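To make the meet-in-the-middle idea concrete, here is a minimal Python sketch (our own illustration, not Feinstein's code; the name `subset_sum_mitm` and its interface are our choices):

```python
from bisect import bisect_left

def subset_sum_mitm(values, target):
    """Decide SUBSET-SUM by meet-in-the-middle: O(2^(N/2)) time and memory,
    up to polynomial factors. The empty subset (sum 0) counts here."""
    half = len(values) // 2
    left, right = values[:half], values[half:]

    def all_subset_sums(items):
        # Enumerate the 2^k subset sums of a k-element list.
        sums = [0]
        for x in items:
            sums += [s + x for s in sums]
        return sums

    left_sums = all_subset_sums(left)            # 2^(N/2) values
    right_sums = sorted(all_subset_sums(right))  # sorted once for binary search
    for s in left_sums:
        # Look for the complementary sum target - s in the right half.
        i = bisect_left(right_sums, target - s)
        if i < len(right_sums) and right_sums[i] == target - s:
            return True
    return False

# Example: {3, 34, 4, 12, 5, 2} contains {4, 5} summing to 9.
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # True
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))  # False
```

Note that the speedup trades memory for time: the algorithm stores $2^{N/2}$ partial sums. Feinstein's claim, in effect, is that no algorithm can beat this $2^{N/2}$ barrier in the worst case.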
Here Feinstein claims (and his critics argue that he asserts without proof) that $\Omega(2^{N/2})$ is a lower bound for the worst-case complexity of the SUBSET-SUM problem. The problem, however, is that computer science and complexity ``theorists'' have essentially no techniques for proving lower bounds of this kind!
So it appears to us that if $P \neq NP$ (as is widely expected), then the statement is formally unprovable (because of the unprovability of lower bounds). In other words, as Feinstein argues, the critics can always say: "Well, what if one day an algorithm is discovered by some new method that is much faster $\ldots$" And what is the answer to such a "what if?" statement?
If we review S. Cook's statement of the P vs NP problem, we see very few references to lower bounds, even though proving lower bounds is indeed the central difficulty in the area.
There is another comment to make regarding the effect of so-called quantum computers and quantum algorithms. With quantum computers, complexity is measured by the number of quantum gates in the circuit realizing the algorithm. But is this gate-count complexity really comparable to the time complexity of classical algorithms?
Firstly, quantum algorithms are not deterministic, i.e. they have a nontrivial probability of giving the wrong answer as output. For example, Grover's search algorithm is quite famous (almost as famous as Shor's factorization algorithm): given a black-box function $f$ on an unstructured set of $N$ elements with a marked input, it finds that input with high probability using only $O(\sqrt{N})$ evaluations of $f$, whereas any classical algorithm needs $\Omega(N)$ evaluations in the worst case. We are being vague here, but the Wikipedia article on Lov Grover and his algorithm has useful links, and a small simulation below illustrates the quadratic speedup:
e.g. https://arxiv.org/abs/quant-ph/0109116 and https://en.wikipedia.org/wiki/Grover's_algorithm .
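Here is a minimal state-vector simulation of Grover's algorithm (a toy illustration of our own; the function name `grover_success_probability` and the parameter choices are ours). It applies the oracle-plus-diffusion step roughly $\frac{\pi}{4}\sqrt{N}$ times and reports the probability of measuring the marked element:

```python
import numpy as np

def grover_success_probability(n_qubits, marked):
    """Simulate Grover search over N = 2**n_qubits elements.
    Returns (success probability, number of Grover iterations)."""
    N = 2 ** n_qubits
    # Start in the uniform superposition over all N basis states.
    state = np.full(N, 1.0 / np.sqrt(N))
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        # Oracle: flip the sign of the marked amplitude.
        state[marked] *= -1.0
        # Diffusion operator 2|s><s| - I: reflect every amplitude about the mean.
        state = 2.0 * state.mean() - state
    return state[marked] ** 2, iterations

for n in (4, 8, 12):
    p, k = grover_success_probability(n, marked=0)
    print(f"N = 2^{n}: {k} iterations, success probability ~ {p:.3f}")
```

For $N = 16$ this needs only 3 iterations and succeeds with probability about $0.96$. Note the two points made above: the number of steps grows like $\sqrt{N}$ rather than $N$, and the answer is still probabilistic, never certain.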
So we can finish this article here. What is our main point? That IF $P \neq NP$, then there will never be a proof, because lower bounds on complexity cannot be established. Critics will always say there is a possibility of a magic algorithm that speeds up the process. And the confusion imported by quantum computers encourages this magical optimism: "Maybe there will even be a quantum algorithm tomorrow which is much faster than anything known!"
So meanwhile we focus on other problems and would not recommend anybody work on P versus NP.