Dewarping algorithm for video images

Postby mrisney » 06 Mar 2015, 09:24

Using ffmpeg I am able to extract a sequence of files from the .mov file
that is generated on the device.

Code: Select all
ffmpeg -i "ricohtheta.mov"  -an -f image2 "output_%05d.png"


This generates a sequence of .png images like this one:

http://i.imgur.com/5Bbrm9M.png


I would like to develop a mobile (iOS/Android) program that edits these images and then encodes them back to an .mp4 file, without having to use the desktop DualfishBlender application.

It seems DualfishBlender applies a warp transformation and stitching to the two 180-degree fisheye images to create the sequence of equirectangular frames that makes up the converted .mov file.

Would it be possible to get these algorithms, or to have an API that does this?
Also, what are the image sizes of the frames in the .mov file? Are they lossless? Are they cropped in some manner? Is extracting image files from the pre-conversion .mov the best way to get frames from a video recorded on the Ricoh Theta M15? Is there a way to get a RAW video file?
mrisney
 
Posts: 2
Joined: 14 Nov 2014, 16:12

Re: Dewarping algorithm for video images

Postby MobileCamera » 02 May 2015, 05:26

I would be interested in the unwrapping algorithm as well.
MobileCamera
 
Posts: 7
Joined: 01 May 2015, 07:07

Re: Dewarping algorithm for video images

Postby mbirth » 03 May 2015, 17:49

If you look at DualfishBlender.exe with a hex editor, you'll notice these shader scripts:

Code: Select all
// Fragment shader 1: re-orients an equirectangular texture. Each output pixel is mapped to a
// 3D direction, rotated by the "tilt" matrix, and mapped back to equirectangular coordinates.
varying vec2 v_texcoord;
varying float v_pos_y;
uniform sampler2D texture;
uniform mat3 tilt;

const float M_PI = 3.14159265358979;
const float M_PI2 = M_PI / 2.0;
const float M_2PI = M_PI*2.0;
void main() {
    float theta0 = M_PI2 - M_PI * v_texcoord.y;
    float phi0 = M_2PI * v_texcoord.x;
    float cosTheta = cos(theta0);
    vec3 p = tilt * vec3(cosTheta * cos(phi0), sin(theta0), cosTheta * sin(phi0));
    if (p.y > 1.0) p.y = 1.0;
    if (p.y < -1.0) p.y = -1.0;
    float theta = asin(p.y);
    float phi = atan(p.z, p.x);
    vec2 q = vec2(mod(phi / M_2PI, 1.0), 0.5 - theta / M_PI);
    gl_FragColor = texture2D(texture, q);
}

// Fragment shader 2: video-range YCbCr-to-RGB conversion (BT.601 coefficients);
// note that the red and blue output channels are written swapped (BGR order).
varying vec2 v_texcoord;
varying float v_pos_y;
uniform sampler2D texture;
void main() {
    vec4 src = texture2D(texture, v_texcoord) - vec4(0.0625, 0.5, 0.5, 0.0);
    vec4 rgb;
    rgb.b = 1.164383*src.r + 1.596027*src.b;
    rgb.g = 1.164383*src.r - 0.391762*src.g - 0.812968*src.b;
    rgb.r = 1.164383*src.r + 2.017232*src.g;
    rgb.a = 1.0;
    gl_FragColor = rgb;
}

// Fragment shader 3: plain pass-through texture sampling.
varying vec2 v_texcoord;
varying float v_pos_y;
uniform sampler2D texture;

void main() {
    gl_FragColor = texture2D(texture, v_texcoord);
}

// Fragment shader 4: blends the two lens images across the stitch seam. textureB is shifted by a
// per-pixel offset read from stitchTable, then cross-faded with textureA over a narrow band in y.
varying vec2 v_texcoord;
varying float v_pos_y;
uniform sampler2D textureA;
uniform sampler2D textureB;
uniform sampler2D stitchTable;

void main() {
    vec2 offset = (texture2D(stitchTable, v_texcoord).rg - vec2(0.5, 0.5))/24.0;
    vec4 colA = texture2D(textureA, v_texcoord);
    vec4 colB = texture2D(textureB, v_texcoord + offset);
    colA.a = 1.0;
    colB.a = 1.0;
    float alpha = 0.5-(v_pos_y/0.02);
    if (alpha<=0.0) { alpha=0.0; } else if (alpha>=1.0) { alpha=1.0; }
    alpha = 1.0 - alpha;
    gl_FragColor = colA*(1.0-alpha) + colB*alpha;
}

// Vertex shader: passes position and texture coordinates through to the fragment shaders.
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texcoord;
varying float v_pos_y;

void main() {
    gl_Position = position;
    v_texcoord = texcoord;
    v_pos_y = position.y;
}


They look like GPU instructions.
mbirth
 
Posts: 122
Joined: 30 Apr 2015, 13:53

Re: Dewarping algorithm for video images

Postby mistapottaOHS » 07 May 2015, 15:11

mbirth wrote:They look like GPU instructions.


They are OpenGL shader programs (GLSL), although I'm not sure how to implement them.

I'm writing a Python script that breaks the .MOV into individual frames, splits, rotates, and de-fisheyes each frame, then recombines them into an x264 video, and finally runs the YouTube .py script to enable the spherical-video parameters on the video as a whole.

The de-fisheye step is the one I'm having the hardest time with, primarily because I don't have the calibration constants for the camera in video mode. If anyone has these constants, I'll be happy to share my Python script so we can convert on whatever hardware we choose. Similarly, if someone can help me port mbirth's shaders to PyOpenGL commands, I'll implement them there as well, or instead.
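(Not my actual script yet, but a rough sketch of the pipeline described above, assuming ffmpeg is on the PATH; process_frame() is only a placeholder for the split/rotate/de-fisheye step, and the 30 fps frame rate is an assumption.)

Code: Select all
import subprocess
from pathlib import Path

SRC = "R0010001.MOV"                   # dual-fisheye video from the camera (placeholder name)
FRAMES = Path("frames")                # raw extracted frames
EQUI = Path("frames_equi")             # dewarped frames

def process_frame(src: Path, dst: Path) -> None:
    """Placeholder for the split / rotate / de-fisheye step."""
    dst.write_bytes(src.read_bytes())  # no-op copy until the real dewarp is in place

FRAMES.mkdir(exist_ok=True)
EQUI.mkdir(exist_ok=True)

# 1. Break the .MOV into individual frames.
subprocess.run(["ffmpeg", "-i", SRC, "-an", "-f", "image2",
                str(FRAMES / "frame_%05d.png")], check=True)

# 2. Dewarp each frame.
for png in sorted(FRAMES.glob("frame_*.png")):
    process_frame(png, EQUI / png.name)

# 3. Recombine the frames into an H.264 video.
subprocess.run(["ffmpeg", "-framerate", "30", "-i", str(EQUI / "frame_%05d.png"),
                "-c:v", "libx264", "-pix_fmt", "yuv420p", "equirect.mp4"], check=True)

# 4. The spherical-metadata injection (the YouTube .py script) runs on equirect.mp4 afterwards.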
mistapottaOHS
 
Posts: 1
Joined: 28 Apr 2015, 02:53

Re: Dewarping algorithm for video images

Postby mbirth » 08 May 2015, 10:13

mistapottaOHS wrote:The de-fisheye step is the one I'm having the hardest time with, primarily because I don't have the calibration constants for the camera in video mode.


Now that you mention it, I could try to play around a bit more with nona. It can also be used to combine two 180° images into one 360° image, I think.

EDIT: Looking somewhat usable so far:

Code: Select all
convert frame.png -crop 960x960+0+0 +repage frame_l.png
convert frame.png -crop 960x960+960+0 +repage frame_r.png


And then using this nona script (the i-lines describe the inputs: f2 = circular fisheye, v197 = 197° field of view, r/p/y = roll/pitch/yaw, n = filename; the p-line describes the 1920x960 equirectangular output):
Code: Select all
i w960 h960 f2 v197 r90 p0 y0 n"frame_l.png"
i w960 h960 f2 v197 r90 p0 y180 n"frame_r.png"
p w1920 h960 f2 v360 r0 p0 y0 n"JPEG q99"


Results in this: http://i.imgur.com/X3vgaO6.jpg
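For batch processing over all extracted frames (not done yet), something along these lines should work once the parameters are final. It assumes ImageMagick and Hugin's nona are installed, that the script above has been saved as theta.pto (name made up), and that nona writes a single merged JPEG per frame as in the example above.

Code: Select all
import subprocess
from pathlib import Path

PTO = "theta.pto"                      # the nona script above, saved to a file (assumed name)
OUT = Path("equi")
OUT.mkdir(exist_ok=True)

for frame in sorted(Path("frames").glob("output_*.png")):
    # Split the dual-fisheye frame into its left and right 960x960 halves (same crops as above).
    subprocess.run(["convert", str(frame), "-crop", "960x960+0+0", "+repage", "frame_l.png"], check=True)
    subprocess.run(["convert", str(frame), "-crop", "960x960+960+0", "+repage", "frame_r.png"], check=True)
    # Remap both halves to equirectangular according to the .pto parameters.
    subprocess.run(["nona", "-o", str(OUT / frame.stem), PTO], check=True)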
mbirth
 
Posts: 122
Joined: 30 Apr 2015, 13:53

Re: Dewarping algorithm for video images

Postby mrisney » 14 May 2015, 07:55

Close, very close, but the stitching is a bit off. Can I see your code on GitHub?

Marc
mrisney
 
Posts: 2
Joined: 14 Nov 2014, 16:12

Re: Dewarping algorithm for video images

Postby mbirth » 17 May 2015, 17:34

I'm afraid it's only these few lines I tinkered with; there's no real video-dewarping code yet beyond the rough sketch above. I want to find the correct parameters first before doing batch processing.
mbirth
 
Posts: 122
Joined: 30 Apr 2015, 13:53

Re: Dewarping algorithm for video images

Postby mbirth » 23 May 2015, 23:58

Looking at this video: https://www.youtube.com/watch?v=QUkt4y1idpY it seems there's yaw/pitch/roll information for each frame in the MOV file somewhere. At least that's the only plausible reason for the image tilts in the corners. This metadata would be needed, too, for successful conversion.
mbirth
 
Posts: 122
Joined: 30 Apr 2015, 13:53

Re: Dewarping algorithm for video images

Postby QSimba » 21 Aug 2015, 07:39

mbirth wrote:Looking at this video: https://www.youtube.com/watch?v=QUkt4y1idpY it seems there's yaw/pitch/roll information for each frame in the MOV file somewhere. At least that's the only plausible reason for the image tilts in the corners. This metadata would be needed, too, for successful conversion.


When converting a video file that was not recorded by the THETA using DualfishBlender, there is an error message like this:
"loadTiltStream(1.MOV) failed, but ignored.
loadAfnTable[A](1.MOV) failed."
So I think the tilt information is stored in a "TiltStream" within the MOV file.
QSimba
 
Posts: 1
Joined: 21 Aug 2015, 07:30

Re: Dewarping algorithm for video images

Postby codetricity » 16 Sep 2015, 02:23

Has anyone made progress with an example script? Thanks for any help.
codetricity
 
Posts: 612
Joined: 31 Jul 2015, 01:56

Re: Dewarping algorithm for video images

Postby Melinka » 13 Feb 2016, 22:03

Hey mistapottaOHS, I'm trying to stitch together a dual-fisheye stream to output an equirectangular video stream. You mentioned a Python script? Have you managed to do it? Where are you at? Could you share your script for turning dual-fisheye video into an x264 video? It would be really helpful to find out how you did it. Thanks a lot!
Melinka
 
Posts: 1
Joined: 13 Feb 2016, 21:39

Re: Dewarping algorithm for video images

Postby mbirth » 14 Feb 2016, 14:49

About the metadata: I just played around with a few tools to try to get the pitch/yaw/roll data from an MP4 file. The best I've got for now is with exiftool:

Code: Select all
---- Camera ----
Make                            : RICOH
Camera Model Name               : RICOH THETA S
Light Source                    : Unknown
Metering Mode                   : Multi-segment
Max Aperture Value              : 1.0
Focal Length                    : 0.0 mm
Sharpness                       : Normal
Maker Note Type                 : Rdc
Firmware Version                : 1.11
Serial Number                   : (00000000)00102683
Exposure Program                : Movie
White Balance                   : Auto
Accelerometer                   : 359.8 -5.4
Compass                         : 202.5
Ricoh Pitch                     : -5.4
Ricoh Roll                      : -0.199999999999989
Focal Length                    : 0.0 mm


And AtomicParsley:

Code: Select all
     Atom udta @ 212954156 of size: 1142653, ends @ 214096809
         Atom RTHU @ 212954164 of size: 59453, ends @ 213013617                ~
         Atom RMKN @ 213013617 of size: 1150, ends @ 213014767                ~
         Atom RDT1 @ 213014767 of size: 114936, ends @ 213129703                ~
         Atom RDT2 @ 213129703 of size: 114936, ends @ 213244639                ~
         Atom RDT3 @ 213244639 of size: 9704, ends @ 213254343                ~
         Atom RDT4 @ 213254343 of size: 9704, ends @ 213264047                ~
         Atom RDT5 @ 213264047 of size: 76640, ends @ 213340687                ~
         Atom RDT6 @ 213340687 of size: 756016, ends @ 214096703                ~
         Atom RDT7 @ 214096703 of size: 24, ends @ 214096727                ~
         Atom RDT8 @ 214096727 of size: 18, ends @ 214096745                ~
         Atom @mod @ 214096745 of size: 21, ends @ 214096766                ~
         Atom @swr @ 214096766 of size: 30, ends @ 214096796                ~
         Atom @mak @ 214096796 of size: 13, ends @ 214096809                ~


From here, you will find that RTHU is the thumbnail and RMKN is the Ricoh MakerNote. So we can rule those out.

Now, I found this Python code; you can add the identifiers found with AtomicParsley (RDT1, RDT2, etc.) to get the raw data of those atoms.

And I guess the rotation information is contained in it. Somehow.
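Along those lines, here's a minimal sketch (not the linked code itself) that walks the MP4 box structure, descends into the container atoms, and dumps the raw payload of the RDTn atoms listed above so they can be inspected:

Code: Select all
import struct
import sys

WANTED = {b"RDT1", b"RDT2", b"RDT3", b"RDT4",
          b"RDT5", b"RDT6", b"RDT7", b"RDT8"}      # names taken from the AtomicParsley listing
CONTAINERS = {b"moov", b"udta"}                    # containers to descend into

def walk(f, start, end):
    pos = start
    while pos + 8 <= end:
        f.seek(pos)
        size, name = struct.unpack(">I4s", f.read(8))
        header = 8
        if size == 1:                              # 64-bit extended size follows the type
            size = struct.unpack(">Q", f.read(8))[0]
            header = 16
        elif size == 0:                            # atom runs to the end of the enclosing box
            size = end - pos
        if size < header:                          # malformed atom, bail out
            break
        if name in WANTED:
            f.seek(pos + header)
            payload = f.read(size - header)
            print(f"{name.decode()} @ {pos}: {len(payload)} bytes")
            with open(name.decode() + ".bin", "wb") as out:
                out.write(payload)
        elif name in CONTAINERS:
            walk(f, pos + header, pos + size)      # recurse into moov/udta
        pos += size

with open(sys.argv[1], "rb") as f:
    f.seek(0, 2)                                   # find the file size
    walk(f, 0, f.tell())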

EDIT: The video file has 3180 frames. So the size of the atom should be a multiple of that, I think.
mbirth
 
Posts: 122
Joined: 30 Apr 2015, 13:53

Re: Dewarping algorithm for video images

Postby kosso » 29 Feb 2016, 15:30

@mbirth Thanks for this info. I've managed to get about as far as you have with de-warping dual-fisheye to an equirectangular image using nona and your .pto file.

Did you manage to get 'enblend' working at all, to clean up those edges?

Thanks! :)
kosso
 
Posts: 18
Joined: 10 Sep 2015, 12:16

Re: Dewarping algorithm for video images

Postby kfarr » 01 Mar 2016, 21:28

These OpenGL instructions are very similar, if not identical, to the syntax used in Quartz Composer custom plugins. I've been hacking away at this for a few hours and this is what I've found so far:
https://github.com/kfarr/theta-s-quartz

In theory, we could replace the fish2sphere code with the OpenGL shader code from the THETA pasted above from the hex editor...
kfarr
 
Posts: 4
Joined: 01 Mar 2016, 20:44

Re: Dewarping algorithm for video images

Postby kosso » 04 Mar 2016, 18:46

This is a great idea. Creating a filter for CamTwist. +1, starred. :D
kosso
 
Posts: 18
Joined: 10 Sep 2015, 12:16

Re: Dewarping algorithm for video images

Postby bboybz » 17 Mar 2016, 11:23

Thank you for your insight, mbirth.

I posted my findings here: https://developers.theta360.com/en/forums/viewtopic.php?f=5&t=187&p=1547#p1547

I was able to remove pitch and roll by removing RDT5 (with unknown side effects), but that's good enough for me if it loads on YouTube.
bboybz
 
Posts: 3
Joined: 09 Mar 2016, 03:50

Re: Dewarping algorithm for video images

Postby ahmedmayman » 19 Oct 2017, 19:04

You can try this one for dewarping:

http://www.kscottz.com/fish-eye-lens-dewarping-and-panorama-stiching/
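If I remember right, the approach described there is a polar-to-cartesian unwrap of a circular fisheye into a panorama strip. A minimal OpenCV sketch of that idea, with the lens centre and radius as assumed placeholder values to be tuned per camera:

Code: Select all
import cv2
import numpy as np

img = cv2.imread("fisheye.png")          # circular fisheye image (placeholder name)
cx, cy, radius = 480, 480, 480           # lens centre and radius -- assumed values
W, H = 1920, 480                         # output panorama size

# Build remap tables: each output column is an angle, each output row a radius.
u, v = np.meshgrid(np.arange(W), np.arange(H))
angle = 2.0 * np.pi * u / W
r = radius * v / H
map_x = (cx + r * np.cos(angle)).astype(np.float32)
map_y = (cy + r * np.sin(angle)).astype(np.float32)

pano = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("panorama.png", pano)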
ahmedmayman
 
Posts: 3
Joined: 09 Aug 2017, 13:54

Re: Dewarping algorithm for video images

Postby codetricity » 09 Nov 2017, 21:06

Blog post on dewarping software.
http://theta360.guide/blog/video/2017/01/25/convert-dual-fisheye-to-equirectangular.html

Relevant list. Original post has live links.

  • oF-thetaEquirectangular by Yasuhiro Hoshino
  • defisheye by @dinhnhat0401 - for GPUImage on iOS devices. Uses Hoshino-san’s shader program
  • theta-s-quartz by Kieran Farr
  • thetaview.js in the video streaming sample app repository. JavaScript sample code to stimulate ideas for the THETA contest.
  • Unity shader packs by goroman and stereoarts
  • THETA-S-LiveViewer-P5 by Kougaku. In Processing language.
codetricity
 
Posts: 612
Joined: 31 Jul 2015, 01:56

Re: Dewarping algorithm for video images

Postby mbirth » 22 Mar 2018, 21:35

I just found this post:

http://paulbourke.net/dome/dualfish2sphere/

I don't know if this tool will work as expected, but from the description it sounds very promising. However, to have the ground at the bottom, you'd also need the orientation info per frame.
mbirth
 
Posts: 122
Joined: 30 Apr 2015, 13:53

Re: Dewarping algorithm for video images

Postby codetricity » 29 Mar 2018, 17:18

mbirth, thanks for sharing this article. I shared it along with some additional thoughts here.

How could the algorithm for a single image be applied to a video? Does the developer need to process each frame individually in a loop?

In order to do a clean dual-fisheye to equirectangular conversion, the developer will need the lens parameter information, which Ricoh hasn't released. However, maybe "good enough" is okay.

As the source code is not available, I'm going to send the author, Paul Bourke, a note about the possibility of using the code for online education.
codetricity
 
Posts: 612
Joined: 31 Jul 2015, 01:56
