My opinion is that, for how most people use cameras, a phone camera is better than a DSLR. However, there are hard physical limits, and the silly numbers you see from cellphones are beyond those. Diffraction is a killer, giving false magnification, and the "telephoto" camera/lens is even worse; the noise on those little sensors with tiny pixels gets bad fast. Computational photography is interesting and can provide real benefits in some cases, but it isn't the panacea some make it out to be.
For example, take the iPhone 11 (chosen because I could find the details I need), which has a pixel pitch of 1.4um and a maximum aperture of f/1.8. The Airy disk from diffraction is 1.9um at f/1.4 and 2.7um at f/2, so at f/1.8 it works out to about 2.4um. That is substantial overlap. For those unfamiliar with what this means: if you had a point source of light, the Airy disk is the area on the sensor it would illuminate. If that area is smaller than the pixel, diffraction is not limiting image quality. But if the area illuminated by that point source is larger than the pixel, you are getting false magnification and resolving power is limited by diffraction. At this level diffraction also blurs fine detail away. The iPhone 12 is stated to have a bigger sensor and an f/1.6 lens, so diffraction will be less of a problem, but it will still be there unless it has a 12MP sensor with a 2.1um pixel pitch, resulting in double the area. I did find one source showing a 1.7um pixel pitch on the iPhone 12, but that is still under what the lens would theoretically be able to resolve.
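If you want to check the numbers above yourself, a quick sketch of the standard Airy-disk formula (diameter to the first minimum, d = 2.44 * wavelength * f-number) reproduces them. The 550 nm wavelength is my assumption here (green light, roughly the middle of the visible band); the pixel pitches are the ones quoted above.

```python
# Airy disk diameter (to the first diffraction minimum):
#   d = 2.44 * wavelength * f_number
# Assumes green light at 550 nm (0.55 um), a common reference wavelength.

WAVELENGTH_UM = 0.55

def airy_disk_um(f_number: float, wavelength_um: float = WAVELENGTH_UM) -> float:
    """Diameter of the Airy disk in micrometers for a given f-number."""
    return 2.44 * wavelength_um * f_number

for f in (1.4, 1.6, 1.8, 2.0):
    print(f"f/{f}: Airy disk ~ {airy_disk_um(f):.1f} um")

# With a 1.4 um pixel pitch, even f/1.4 gives a ~1.9 um disk: the spot
# from a point source already spills past a single pixel, so the system
# is diffraction-limited rather than pixel-limited.
```

Running it gives roughly 1.9 um at f/1.4, 2.4 um at f/1.8, and 2.7 um at f/2, matching the figures quoted above.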
The diffraction-limit calculation also assumes an ideal lens, which those little cell-camera lenses are not. A non-ideal lens only takes image quality further down from there.
This also ignores chromatic aberrations, which are corrected in software and which I assume are a problem unless they are using some exotic glass. Software can hide most of these issues, but lost fine detail is something that can't be recreated. Aggressive noise reduction and sharpening are used to make up for it, and I find a lot of the images end up on the crunchy side, where they just look off, especially since they lack fine detail.
With each sensor upgrade, things like noise performance get better, which is a good thing and applies to any sensor size. However, I have seen some of the max-ISO shots (ISO 3072) from the 11, and they were about as bad as the K-3 in the 25600 to 51200 range.
The night mode in the 11 left me unimpressed even though it was raved about. My stance is that until a cellphone can take a single astro shot (not the multi-shot stack they do now) that is somewhat comparable to this single shot, they have nothing to offer for astrophotography. Yes, that is a single shot that I manually processed, but to get that image there has to be a lot of good data behind it. I could probably reprocess it and get better results now.