Is it possible to determine from only a photo of a cityscape whether it was taken close with wide angle or from a distance with zoom?

I look after a project digitizing and providing access to vintage photos in our city. Something we've always struggled with is determining from where a photo was taken. We have data on building locations and dimensions, and we are looking at creating a tool to determine the camera position automatically based on tagged buildings in an image. I'm confident we will be able to find a line in 3D space on which the camera must have been. However, I don't know if it's possible to determine a point on that line.



Is this even possible?







Tags: 3d, forensics






asked 4 hours ago









pr3sidentspence

  • Closely related: How can I figure out precisely where someone was standing to take this old cityscape photograph?

    – scottbb
    2 hours ago











  • Suggestion: ask a surveyor. What you're asking about, and what the answers talk about, are closely related to the methods used in surveying.

    – scottbb
    2 hours ago











  • Not sure if it's exactly what you want, but there is a piece of software called fSpy that can calculate all the camera/lens parameters if you select vanishing lines on it -> fspy.io/basics

    – Vitor
    3 mins ago



























3 Answers
2














I am not knowledgeable about the math and programming needed for this, but I can provide some insight into the information such a tool would need.



What you want to look into is perspective distortion in photography. In particular, you need to research lens compression (which is a bogus term, but that aside).



A quick overview of perspective distortion and compression:

In photography, the focal length of the lens used determines how wide or narrow the shot is. A small focal length means wide angle, and vice versa.

The choice of focal length not only affects distortion, as beautifully portrayed by the GIF on the wiki page, but also determines compression.



Say, you are at 10 metres distance from Mike. You take a picture of Mike with a 50mm lens, and then move back to 40 metres and take another picture with, this time, a 200mm lens. Mike's beautiful face occupies the same space on the frame, but this time, the background seems to be way closer to him and Mike's face is 'flatter' than in the previous picture. This is what is generally referred to as compression.
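
To make the "Mike" example concrete, here is a minimal pinhole-model sketch in Python (the 1.8 m subject height and the 10 m wide building 20 m behind Mike are assumed numbers, added only for illustration):

    # Hedged sketch: simple pinhole model, image size = focal_length * size / distance.
    # The subject height (1.8 m) and the background building (10 m wide, 20 m
    # behind Mike) are assumed numbers, used only to illustrate the effect.
    def image_size_mm(object_size_m, distance_m, focal_length_mm):
        return focal_length_mm * object_size_m / distance_m

    for distance_m, focal_mm in [(10, 50), (40, 200)]:
        subject = image_size_mm(1.8, distance_m, focal_mm)
        background = image_size_mm(10.0, distance_m + 20, focal_mm)
        print(f"{focal_mm}mm at {distance_m}m: subject {subject:.1f}mm, "
              f"background {background:.1f}mm on the sensor")

Mike renders at 9 mm on the sensor in both shots, but the building behind him doubles in size, which is the compression described above.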



Looking at how compressed the images are could possibly help you in determining the distance. This is already tricky though and would probably require a fair amount of guesswork. It does get trickier, however.



If you don't possess the original files or negatives/positives, it's quite possible the photo you see has been cropped.

Cropping makes things even more difficult, because cropping affects the apparent compression too: a picture taken with a 50mm lens and cropped to half the frame looks the same (in terms of compression) as one taken from the same distance with a 100mm lens.



See the picture?






answered 4 hours ago
Tim Stack


















  • @pr3sidentspence here's a great visual of Tim's example with "Mike" - petapixel.com/2016/07/28/camera-adds-10-pounds

    – Hueco
    4 hours ago


















1














Yes, it is possible. I am sure there is an approach to doing this in a general way using the geometry of perspective, but here is a toy example to show it can be done. Imagine you have something like this:



[diagram of the positions of the camera and the two points]



Here the camera is at ground level at the left, and there are two points at an equal height, h, separated by s, with the camera being a distance D from the nearest one, and everything is in a line (let's ignore the problem that the nearest object is going to obscure the more distant one: in real life things will not be in a line like this and the maths will be harder).



You know h and s as you know what is where in the city, and you want to work out D from looking at the photograph: can you? Yes, here's how.



Let's imagine that the photograph made by the camera is projected onto a screen a distance d in front of the camera, in such a way that the images of the two points line up with their real positions as seen from the camera. By simple geometry the heights on the image of the images of the two points are



  • h1 = hd/D (nearest one)

  • h2 = hd/(D + s) (further one)

So now, the things we know are s and h (from the geometry of the city) and h1 and h2 (from measuring the photograph); we don't know d (and in fact don't care), and we don't know D but do care.



So now we can do a bit of algebra:



h1/h2 = hd(D + s)/(hdD) = (D + s)/D



So



D = s/(h1/h2 - 1)



So two things have happened here, one will always happen and one is because I chose a toy example.



  • d has vanished, and all that matters is the ratio of h1 and h2: this should be obvious because we can enlarge the photo to any size we like, so the only thing that can matter is the ratio of positions on the image.

  • h has vanished. This only happens because I chose both of the points to be at the same height: it won't happen in general.

Finally you can convince yourself that this expression for D is right: if h1 and h2 are the same, then D becomes infinite, and that's right, because you will only see the two towers at the same height if you are infinitely far from them. Similarly if h1/h2 becomes very large then D becomes very small, and this is right: if you're very close to one tower it will appear very big on the image.
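
As a quick numerical check of that algebra, here is a minimal sketch in Python (h, s and the true D are made-up numbers) that projects two equal-height points through the toy camera and then recovers D from the two image heights:

    # Made-up toy-example numbers: two points of height h, separated by s,
    # camera a distance D_true from the nearest one.
    h, s, D_true = 30.0, 100.0, 250.0
    d = 0.05                      # projection distance; it cancels out anyway

    h1 = h * d / D_true           # image height of the nearest point
    h2 = h * d / (D_true + s)     # image height of the further point

    D = s / (h1 / h2 - 1)
    print(D)                      # ~250.0, recovering D_true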



Now, as I said, this is a toy example: in real life nothing will be lined up, everything will be different heights &c &c. But if you have enough points on the image for which you know where the real points are then you can tell where the image was made from (interesting question: how many points do you need? I suspect in general it will be 3, although it might be 4: I am sure this is known however).



I am sure that there are books on the maths of perspective, and these will have general solutions you can use: I'd recommend doing some searches on that.




Notes:



  • I've assumed there is no distortion introduced by the optical system of the camera – in real life there will be some but for most lenses it should be small enough (don't try this with fisheyes or very wide angle lenses though);

  • I have not thought about what camera movements (common for old LF images) might do – or rather I have thought & I haven't drawn a definite conclusion although I don't think they will matter.





edited 3 hours ago
answered 3 hours ago
tfb
































    1














    You can often use techniques from piloting. You can find alignments in the picture, for instance the corner of a building that masks off the 2nd vertical line of windows of the building in the back. From this you determine a line on which the camera must have been. With two more such alignments you get three lines that give you a triangle in which the camera must have been, and its size gives you a degree of accuracy. Foreground elements can then give you a very precise position. GoogleEarth is your friend.
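
    Here is a minimal sketch of that idea in Python (the landmark coordinates are made up; in a real tool they would come from the building database): each alignment of two known landmarks defines a line the camera must lie on, and intersecting those lines pairwise gives the triangle of candidate positions.

        # Each alignment is a pair of known landmark positions (map coordinates,
        # made up here) that appear lined up in the photo; the camera lies
        # somewhere on the line through them.
        def intersect(p1, p2, q1, q2):
            # Intersection of the line p1-p2 with the line q1-q2 (None if parallel).
            (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
            denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if denom == 0:
                return None
            t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

        alignments = [((0, 0), (100, 50)), ((20, 80), (90, 10)), ((-10, 40), (120, 45))]
        triangle = [intersect(*alignments[i], *alignments[j])
                    for i, j in [(0, 1), (0, 2), (1, 2)]]
        print(triangle)   # corners of the region that must contain the camera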



    Somewhat unrelated to photography: I was hunting for new housing a couple of years ago and the advertisements never give the complete address, but by practicing that technique using the views from the windows and balcony in the advertisement pictures I was usually able to spot the building.






    answered 2 hours ago
    xenoid






















