Originally posted by kh1234567890 That is exactly the problem - it is not just mathematically sharpening, if at all, it is making up features that it 'thinks' ought to be there.
I've played with it a bit, and my conclusion was that the default settings it chooses are far too aggressive. Past about 5-10% sharpening I start to see textures and objects that are obviously made up.
I wouldn't be happy with either the "before" or "after" results there... but in fairness, the "before" seems like a pretty extreme example.
I suspect all of these AI programs work best when there's already reasonable definition and something essentially recognisable in an image. The less there is to work with, the fuzzier the AI match and the more you're at the mercy of machine learning to "fill in" content rather than enhance what's already there. Still, this is a good example of what initially put me off one of Topaz's other products, Gigapixel AI. I can live with some minor enhancement artefacts, but when content is actually replaced with something the software thinks might look better, that's a step too far for me.
That said, I can understand why folks find this technology exciting, and I can see why it has commercial applications - especially where the typical non-pixel-peeping customer is concerned.