Mert Özer
mert-o.bsky.social
Reposted by Mert Özer
Had a great experience presenting our work on 3D scene reconstruction from a single image with @visionbernie.bsky.social at #3DV2025 🇸🇬

andreeadogaru.github.io/Gen3DSR

Reach out if you're interested in discussing our research or exploring international postdoc opportunities @fau.de
March 26, 2025 at 2:27 AM
Reposted by Mert Özer
Happy to share our latest 3D generative breast model: the *implicit* RBSM, or iRBSM for short. Unlike its PCA-based predecessor, the iRBSM leverages implicit neural representations, yielding a highly detailed and expressive model.

Paper: arxiv.org/abs/2412.13244
December 19, 2024 at 11:23 AM
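To make the PCA-vs-implicit contrast above concrete, here is a toy sketch (my own illustration, not the iRBSM architecture; all names and dimensions are made up): a PCA morphable model produces a fixed-topology mesh as mean plus a linear combination of basis vectors, whereas an implicit model maps any query point and a latent code to a signed distance, so surface detail is not tied to a fixed vertex count.

```python
import torch
import torch.nn as nn

# PCA-style morphable model: shape = mean + basis @ coefficients
# (fixed topology with N vertices, linear span of the training shapes).
N, K = 5000, 20                       # illustrative vertex / component counts
mean = torch.zeros(N * 3)
basis = torch.randn(N * 3, K) * 0.01
coeffs = torch.randn(K)
pca_shape = (mean + basis @ coeffs).view(N, 3)

# Implicit alternative: a network maps (query point, latent code) -> SDF value;
# the surface is the zero level set, so resolution is not fixed up front.
class ImplicitShape(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz, latent):
        return self.net(torch.cat([xyz, latent.expand(xyz.shape[0], -1)], dim=-1))

sdf_values = ImplicitShape()(torch.rand(1024, 3), torch.randn(1, 128))
```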
Reposted by Mert Özer
I am very excited to announce that our paper “DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields” has been accepted to #IEEE RA-L.

Paper: ieeexplore.ieee.org/document/107...
Project page: hannahhaensen.github.io/DynaMoN/
Code: github.com/HannahHaense...
December 9, 2024 at 11:39 AM
Reposted by Mert Özer
The potential of photovoltaics is quite high. This is especially true for the summer half of the year, but PV also contributes a great deal to the energy supply during the transitional seasons.
The current chart, using our house as an example.
In November, still 50%.
November 30, 2024 at 9:46 PM
Reposted by Mert Özer
Credits to Takuma Nishimura and Martin Oeggerli (Micronaut) | Find more on: www.micronaut.ch
Micronaut: The fine art of microscopy by science photographer Martin Oeggerli
www.micronaut.ch
November 29, 2024 at 2:46 PM
Reposted by Mert Özer
Scanning electron microscopes can image surfaces invisible to the naked eye. However, they only capture grayscale images. Manual coloring is a cumbersome process, which is why FAU researchers use the 3D structure to propagate a single colorized view to the whole scene. Impressive! 🎨

Artwork by Micronaut.
November 29, 2024 at 2:44 PM
Reposted by Mert Özer
We need to convert street lights to LED. The annual electricity consumption costs more than the bulb itself. Such an investment does cost money, but otherwise future municipal budgets are burdened unnecessarily: non-LED lamps are uneconomical.
November 28, 2024 at 12:13 PM
Reposted by Mert Özer
My growing list of #computervision researchers on Bsky.

Missed you? Let me know.

go.bsky.app/M7HGC3Y
November 19, 2024 at 11:00 PM
Reposted by Mert Özer
Solar. More solar!
November 26, 2024 at 11:37 PM
Reposted by Mert Özer
We handle occlusions by applying amodal completion to each instance. Before reconstruction, we address the object-crop domain shift (e.g., focal length) through reprojection; the completed instance is then reconstructed using existing models that perform well for single objects. (4/5)
November 19, 2024 at 9:52 PM
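A minimal sketch of the kind of crop reprojection mentioned in the post above: warping an object crop to a virtual camera with a canonical focal length before handing it to a single-object reconstructor. The pure-intrinsics homography, the canonical value, and the function name are assumptions of mine, not Gen3DSR's exact implementation.

```python
import numpy as np
import cv2  # any image-warping routine would do here

def reproject_to_canonical(crop, K_src, f_canon=500.0):
    """Warp a crop from its source intrinsics to a virtual camera with a
    canonical focal length and a centered principal point.

    Approximation: a pure change of intrinsics, H = K_canon @ K_src^-1
    (no translation), which already removes most of the focal-length
    domain shift on tight object crops.
    """
    h, w = crop.shape[:2]
    K_canon = np.array([[f_canon, 0.0, w / 2.0],
                        [0.0, f_canon, h / 2.0],
                        [0.0, 0.0, 1.0]])
    H = K_canon @ np.linalg.inv(K_src)
    return cv2.warpPerspective(crop, H, (w, h), flags=cv2.INTER_LINEAR)

# Example: a crop taken by a wide-angle scene camera (short focal length).
crop = np.zeros((256, 256, 3), dtype=np.uint8)
K_src = np.array([[300.0, 0.0, 128.0],
                  [0.0, 300.0, 128.0],
                  [0.0, 0.0, 1.0]])
canonical_crop = reproject_to_canonical(crop, K_src)
```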
Reposted by Mert Özer
First, we parse the scene image by identifying its constituent entities and estimating depth and camera parameters. Each instance is then processed individually. The unprojected depth serves as a layout reference for composing the scene in 3D space. (3/5)
November 19, 2024 at 9:52 PM
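A small sketch of the unprojection step mentioned above, using the standard pinhole model (not taken from the paper's code; names are illustrative): each pixel is lifted to 3D as its depth times the inverse-intrinsics ray, which yields the point cloud used as a layout reference.

```python
import numpy as np

def unproject_depth(depth, K):
    """Lift a per-pixel depth map to a 3D point cloud in camera coordinates:
    p(u, v) = d(u, v) * K^-1 @ [u, v, 1]^T.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T        # rays with z = 1
    points = rays * depth.reshape(-1, 1)   # scale each ray by its depth
    return points.reshape(h, w, 3)

# Example with a flat dummy depth map and pinhole intrinsics.
K = np.array([[500.0, 0.0, 160.0],
              [0.0, 500.0, 120.0],
              [0.0, 0.0, 1.0]])
depth = np.full((240, 320), 2.0)
layout_points = unproject_depth(depth, K)  # shape (240, 320, 3)
```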
Reposted by Mert Özer
Most single-image scene-level reconstruction methods require 3D-supervised end-to-end training and generalize poorly. We propose a modular approach in which each component performs well by focusing on a specific task that is easier to supervise. (2/5)
November 19, 2024 at 9:52 PM
Reposted by Mert Özer
Excited to share our paper, which will be presented at #3DV2025

✨ Gen3DSR: Generalizable 3D Scene Reconstruction via Divide and Conquer from a Single View ✨
🌐 Project page: andreeadogaru.github.io/Gen3DSR
📄 Paper: arxiv.org/abs/2404.03421
👩‍💻 Code: github.com/AndreeaDogar...
(1/5)
November 19, 2024 at 9:52 PM
Reposted by Mert Özer
(3/3) For colorization, we use images manually colorized by artist Martin Oeggerli. We project the colors into 3D space using the estimated depths to create supervision, and additionally use the feature loss employed by Ref-NPR to handle regions not visible in the colorized input.
November 13, 2024 at 10:35 PM
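A rough sketch of the depth-based color projection described above: forward-splat the artist-colorized reference view into another view to obtain sparse color supervision there, leaving uncovered regions to the Ref-NPR-style feature loss. The nearest-pixel splatting, the absence of z-buffering, and all names are simplifications of mine, not ArCSEM's code.

```python
import numpy as np

def splat_reference_colors(ref_rgb, ref_depth, K, T_ref_to_tgt, tgt_shape):
    """Project colorized reference pixels into a target view via the
    reference depth; returns per-pixel colors and a coverage mask."""
    h, w = ref_depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).astype(np.float64)
    pts_ref = (pix @ np.linalg.inv(K).T) * ref_depth.reshape(-1, 1)   # 3D in ref camera
    pts_h = np.concatenate([pts_ref, np.ones((pts_ref.shape[0], 1))], axis=1)
    pts_tgt = (pts_h @ T_ref_to_tgt.T)[:, :3]                         # 3D in target camera
    proj = pts_tgt @ K.T
    uv = proj[:, :2] / proj[:, 2:3]

    th, tw = tgt_shape
    colors = np.zeros((th, tw, 3))
    mask = np.zeros((th, tw), dtype=bool)
    ui, vi = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    valid = (proj[:, 2] > 0) & (ui >= 0) & (ui < tw) & (vi >= 0) & (vi < th)
    colors[vi[valid], ui[valid]] = ref_rgb.reshape(-1, 3)[valid]
    mask[vi[valid], ui[valid]] = True
    return colors, mask  # supervise where mask is True; feature loss elsewhere

# Example call with dummy data and an identity relative pose.
K = np.array([[800.0, 0.0, 128.0], [0.0, 800.0, 128.0], [0.0, 0.0, 1.0]])
ref_rgb = np.random.rand(256, 256, 3)
ref_depth = np.full((256, 256), 3.0)
colors, mask = splat_reference_colors(ref_rgb, ref_depth, K, np.eye(4), (256, 256))
```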
Reposted by Mert Özer
(2/3) Our work utilizes Scanning Electron Microscopy (SEM) images of pollen. Two stages: grayscale novel view synthesis and colorization. The grayscale scene is represented by 2DGS, where poses are estimated using perspective projection with exceptionally long focal lengths.
November 13, 2024 at 10:35 PM
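A tiny numeric illustration (my own, not from the paper) of why an exceptionally long focal length makes a perspective camera behave almost orthographically, matching the near-parallel electron beam of an SEM: at the same image magnification, a distant long-focal camera barely foreshortens over the object's depth range.

```python
import numpy as np

def project(points, f):
    """Perspective projection u = f * X / Z (principal point omitted)."""
    X, Y, Z = points.T
    return f * X / Z

# The same surface point seen at two depths one unit apart.
pts_near = np.array([[1.0, 0.0, 10.0], [1.0, 0.0, 11.0]])
pts_far = np.array([[1.0, 0.0, 1e5], [1.0, 0.0, 1e5 + 1.0]])

# Focal lengths chosen so both cameras have the same magnification (100 px/unit).
u_near = project(pts_near, f=1e3)  # [100.0, 90.9]   -> ~9 px of perspective shift
u_far = project(pts_far, f=1e7)    # [100.0, 99.999] -> ~0.001 px, nearly orthographic
print(u_near, u_far)
```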
Reposted by Mert Özer
Thrilled to share our work: 𝐀𝐫𝐂𝐒𝐄𝐌: Artistic Colorization of SEM Images via Gaussian Splatting
Novel view synthesis of scanning electron microscopy images and conditional colorization.
📝 arXiv: arxiv.org/abs/2410.21310
🎨Project page: ronly2460.github.io/ArCSEM
(1/3)
November 13, 2024 at 10:34 PM
Reposted by Mert Özer
🚨 Paper Alert 🚨 #CVIU
“NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters” has been accepted to #CVIU!!!
Many thanks to my co-authors! Shout out to Fabian Deuser, @visionbernie.bsky.social, Norbert Oswald, and Daniel Roth.
(1/4)
November 13, 2024 at 7:35 AM
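A generic sketch of what jointly optimizing camera parameters with a NeRF looks like in code (a bare-bones illustration, not NeRFtrinsic Four's actual parameterization): intrinsics and extrinsics become trainable tensors that share an optimizer with the radiance field, so the photometric loss backpropagates into them as well.

```python
import torch
import torch.nn as nn

class LearnableCameras(nn.Module):
    """Per-image camera parameters as trainable tensors."""
    def __init__(self, num_images):
        super().__init__()
        self.log_focal = nn.Parameter(torch.zeros(num_images))    # intrinsics
        self.rot_vec = nn.Parameter(torch.zeros(num_images, 3))   # extrinsics (axis-angle)
        self.trans = nn.Parameter(torch.zeros(num_images, 3))

    def focal(self, i, base=500.0):
        return base * torch.exp(self.log_focal[i])  # keeps the focal length positive

cameras = LearnableCameras(num_images=30)
nerf = nn.Sequential(nn.Linear(63, 256), nn.ReLU(), nn.Linear(256, 4))  # stand-in field

# One optimizer over both the field and the cameras: the rendering loss
# updates focal lengths and poses together with the scene representation.
optimizer = torch.optim.Adam(
    list(nerf.parameters()) + list(cameras.parameters()), lr=1e-3)
```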
How can we learn a multi-modal neural radiance field? What’s the best way to integrate images from a second modality, other than RGB, into NeRF? Check out our new paper!
Project page: mert-o.github.io/ThermalNeRF/
Paper: arxiv.org/abs/2403.11865
Dataset: zenodo.org/records/1106...
1/7
November 12, 2024 at 8:02 PM
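One straightforward answer to the integration question above, sketched as a toy module (an assumption for illustration, not necessarily the design used in the paper): share the geometry between modalities and give the field separate radiance heads for RGB and thermal.

```python
import torch
import torch.nn as nn

class MultiModalRadianceField(nn.Module):
    """Shared trunk and density, with one radiance head per modality."""
    def __init__(self, pos_dim=63, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)       # geometry shared across modalities
        self.rgb_head = nn.Linear(hidden, 3)      # visible-light radiance
        self.thermal_head = nn.Linear(hidden, 1)  # long-wave infrared radiance

    def forward(self, x):
        h = self.trunk(x)
        return (torch.relu(self.density(h)),
                torch.sigmoid(self.rgb_head(h)),
                torch.sigmoid(self.thermal_head(h)))

# Query a batch of positionally encoded sample points.
sigma, rgb, thermal = MultiModalRadianceField()(torch.randn(1024, 63))
```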