It is usually a bad idea to measure a programmer’s efficiency in numbers. There are a lot of useless metrics, and each one brings negative (even disruptive) side effects. Once I got a piece of code written by someone who was literally paid per line of code. It was a module of about 9000 lines of JavaScript spaghetti. The author used neither loops nor arrays. I squeezed the module down to about 150 LOC. I also heard about a company where programmers get bonuses for introducing the fewest bugs by the end of the month. Their “top” engineers simply do not write code. And there are dozens of such “success” stories on the Internet. If the manager is a dumbass, nothing will help.
Nevertheless, sometimes it makes sense to get some numbers. They can serve as a signal that something went wrong. Maybe your teammate has family problems or burnout. Anyway, if a programmer usually produces X units of work but has done only X/2 in the last measured period, it is time to figure out what happened.
I am not a manager, and I use metrics to estimate my own productivity. I often work with no supervision at all, and my collaboration with customers is based purely on trust: “Do what you want, but get it done.” So I need some metric for self-control. And I discovered an optimal one (at least it works for me).
Meet the number of unit tests (NOUT). A unit test usually has constant complexity; that is its key feature. And it is obvious that NOUT grows proportionally with program complexity. So you can measure productivity in NOUT/week, or total progress simply in NOUT. You can even try to estimate features in NOUT; it will probably be more accurate than story points. And if you start to pay your programmers per NOUT, they will just pad the test coverage (that is bad advice, don’t do that, they will find out how to fuck you up).
In my opinion, it is still far from perfect. But if used with wisdom, it can give you some advantage. If not, well... see the first paragraph.