Running the 3-D Solver

With trackers tracked, and the coordinate system and lens settings configured, you are ready to obtain the 3-D solution.
Solving Modes
Switch to the Solve control panel. Select the solver mode as follows:
·        Auto: the normal automatic 3-D mode for a moving camera, or a moving object.
·        Refine: after a successful Auto solution, use this to rapidly update the solution after making minor changes to the trackers or coordinate system settings.
·        Tripod: the camera was on a tripod; track pan/tilt/roll(/zoom) only.
·        Refine Tripod: same as Refine, but for Tripod-mode tracking.
·        From Seed Points: use six or more known 3-D tracker positions per frame to begin solving (typically, when most trackers have existing coordinates from a 3-D scan or architectural plan). You can use Place mode in the perspective view to put seed points on the surface of an imported mesh. Turn on the Seed button on the coordinate system panel for such trackers. You will often make them locks as well.
·        From Path: when the camera path has previously been tracked, estimated, or imported from a motion-controlled camera. The seed position, orientation, and field of view of the camera must be approximately correct.
·        Indirect: to estimate based on trackers linked to another shot, for example, a narrow-angle DV shot linked to wide-angle digital camera stills. See Multi-shot tracking.
·        Individual: when the trackers are all individual objects buzzing around, used for motion and facial capture with multiple cameras.
·        Disabled: when the camera is stationary, and an object viewed through it will be tracked.
The solving mode mostly controls how the solving process is started: what data is considered to be valid, and what is not. The solving process then proceeds pretty much the same way after that, subject to whatever constraints have been set up.
Automatic-Mode Directional Hint
When the solver is in Automatic mode, a secondary drop-down list activates: a hint to tell SynthEyes in which direction the camera moved—specifically between the Begin and End frames on the Solver Panel. This secondary dropdown is normally in Automatic mode also. However, on difficult solves you can use the directional hint (Left, Right, Upwards, Downwards, Push In, Pull Back) to tell SynthEyes where to concentrate its efforts in determining a suitable solution. Here it has been changed:
[Screenshot: the directional hint drop-down set to a specific direction]
World Size
Adjust the World Size on the solver panel to a value comparable to the overall size of the 3-D set being tracked, including the position of the camera. The exact value isn’t important. If you are shooting in a room 20’ across, with trackers widely dispersed in it, use 20’. But if you are only shooting items on a desktop from a few feet away, you might drop down to 10’.
Important: the world size does not control the size of the scene; that is the job of the coordinate system setup.
The world size is used to stabilize some internal mathematics during solving: essentially, all the coordinates are divided by it internally, so that they stay near 1 even when raised to a large power. After the calculation, the world size is multiplied back in. This process improves the numerical accuracy of the computation.
Choose your coordinate system to keep the entire scene near the origin, as measured in multiples of the world size. If all your trackers will be 1000 world-sizes from the origin (for example, near [1000000,0,0] with a world size of 1000), accuracy might be affected. The Shift Constraints tool can help move them all if needed.
As you see, the world size does not affect the calculation directly at all. Yet a poorly chosen world size can sabotage a solution. If you have a marginal solve, sometimes changing the world size a little can produce a different solution, maybe even the right one.
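As a rough illustration of that normalization, here is a minimal sketch (not SynthEyes code; refine_fn stands in for whatever internal refinement is actually run):

    def solve_normalized(points_3d, world_size, refine_fn):
        # Scale coordinates down by the world size so values stay near 1,
        # run the (stand-in) refinement, then scale back to the original units.
        scaled = [[c / world_size for c in p] for p in points_3d]
        refined = refine_fn(scaled)
        return [[c * world_size for c in p] for p in refined]

    # Toy usage: an identity "refinement" just shows the round trip.
    pts = [[10.0, 3.0, -7.5], [18.0, 0.5, 2.0]]   # a set roughly 20 units across
    print(solve_normalized(pts, world_size=20.0, refine_fn=lambda p: p))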

The world size also is used to control the size of some things in the 3-D views and during export: we might set the size of an object representing a tracker to be 2% of the world size, for example.

Go!
You’re ready, set, so hit Go! on the Solver panel. SynthEyes will pop up a monitor window and begin calculating. Note that if you have multiple cameras and objects tracked, they will all be solved simultaneously, taking inter-object links into account. If you want to solve only one at a time, disable the others.
The calculation time will depend on the number of trackers and frames, the amount of error in the trackers, the amount of perspective in the shot, the number of confoundingly wrong trackers, the phase of the moon, etc. For a 100-frame shot with 120 trackers, a 2-second solving time might be typical. With hundreds or thousands of trackers and frames, some minutes may be required, depending on processor speed. Shots with several thousand frames can be solved, though it may take some hours.
It is not possible to predict ahead of time how many iterations or how much time solving a scene will require, so the progress bar on the solving monitor window reflects the fraction of the frames and trackers currently included in the tentative solution being worked on. SynthEyes can be very busy even though the progress bar is not changing, and the progress bar can be at 100% while the job is still not done; it will finish once the current round of iterations completes.
 
During Solving
If you are solving a lengthier shot where trackers come and go, and where there may be some tracking issues, you can monitor the quality of the solving from the messages displayed.
As it solves, SynthEyes is continually adjusting its tentative solution to become better and better (“iterating”). As it iterates, SynthEyes displays the field of view and total error on the main (longest) shot. You can monitor this information to determine if success is likely, or if you should stop the iterations and look for problems.
SynthEyes will also display the range of frames it is adding to the solution as it goes along. This is invaluable when you are working on longer shots: if you see the error suddenly increase when a range of frames is added, you can stop the solve and check the tracking in that range of frames, then resume.
You can monitor the field of view to see if it is comparable to what you think it should be — either an eyeballed guess, or if you have some data from an on-set supervisor. If it does not seem good to start, you can turn on Slow but sure and try again.
Also, you can watch for a common situation where the field of view starts to decrease more and more until it gets down to one or two degrees. This can happen if there are some very distant trackers which should be labeled Far, or if there are trackers on moving features, such as a highlight, actor, or automobile.
If the error suddenly increases, this usually indicates that the solver has just begun solving a new range of frames that is problematic.
Your processor utilization is another source of information. When the tracking data is ambiguous, usually only on long shots, you will see the message “Warning: not a crisp solution, using safer algorithm” appear in the solving window. When this happens, the processor utilization on multi-core machines will drop, because the secondary algorithm is necessarily single-threaded. If you haven’t already, you should check for trackers that should be “far” or for moving trackers.
After Solving
Though having a solution might seem to be the end of the process, in fact, it's only the …middle. Here's a quick preview of things to do after solving, which will be discussed in more detail in further sections.
·        Check the overall errors.
·        Look for spikes in tracker errors and the camera or object path.
·        Examine the 3-D tracker positioning to ensure it corresponds to the cinematic reality.
·        Add, modify, and delete trackers to improve the solution.
·        Add or modify the coordinate system alignment.
·        Add and track additional moving objects in the shot.
·        Insert 3-D primitives into the scene for checking or later use.
·        Determine the position or direction of lights.
·        Convert computed tracker positions into meshes.
·        Export to your animation or compositing package.
Once you have an initial camera solution, you can approximately solve additional trackers as you track them, using Zero-Weighted Trackers (ZWTs).
RMS Errors
The solver control panel displays the root-mean-square (RMS) error for the selected camera or object, which is how many pixels, on average, each tracker is from where it should be in the image. [In more detail, the RMS average is computed by taking a set of error numbers, squaring them, dividing by the count of numbers to get the average square, then taking the square root of that average. It's the usual way of measuring how big errors are when the errors can be both positive and negative; a regular average might come out to zero even if there was a lot of error!]
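To make the bracketed description concrete, here is a small sketch of the RMS calculation (illustrative numbers only, not from any real solve):

    import math

    def rms(errors):
        # Square each error, average the squares, then take the square root.
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    # Signed per-frame errors in pixels: a plain average nearly cancels out,
    # while the RMS reports the typical magnitude.
    errors_px = [0.4, -0.3, 0.6, -0.5, 0.2]
    print(sum(errors_px) / len(errors_px))   # about +0.08 px, misleadingly small
    print(rms(errors_px))                    # about 0.42 px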
The RMS error should be under 1 pixel, preferably under 0.5 for well-tracked features. Note that during solving, the popup will show an RMS error that can be larger, because it contains contributions from any constraints that have errors. Also, the error during solving is for ALL of the cameras and objects combined; it is converted from internal format to human-readable pixel error using the width of the longest shot being solved for. The field of view of that shot is also displayed during solving.
There is an RMS error number for each tracker displayed on the coordinate system and tracker panels. The tracker panel also displays the per-frame error, which is the number being averaged.
Checking the Lens
You should immediately check the lens panel's field of view, to make sure that there is a plausible value. A very small value generally indicates that there are bad trackers, severe distortion, or that the shot has very little perspective (an object-mode track of a distant object, say).
Solving Issues
If you encounter the message "Can't find suitable initial frames", it means that there is limited perspective in the shot, or that the Constrain button is on but the constrained trackers are not simultaneously valid. Turn on the checkboxes next to the Begin and End frames on the Solver panel, and select two frames with many trackers in common, where the camera or object rotates around 30 degrees between the two frames. You will see the number of trackers in common between the two frames; you want this to be as high as possible. Make sure the two frames have a large perspective change as well: a large number of trackers will do no good if they do not also exhibit a perspective change. Also, it is a good idea to turn on the "Slow but sure" checkbox.
You may encounter "size constraint hasn't been set up" under various circumstances. If the solving process stops immediately, you probably have no trackers set up for the camera or object cited. Note that if you are doing a moving-object shot, you need to set the camera's solving mode to Disabled if you are not tracking it as well, or you will get this message.
When you are tracking both a moving camera and a moving object, you need to have a size constraint for the camera (one way or another), and a size constraint for the object (one way or another). So you need TWO size constraints. It isn't immediately obvious to many people why TWO size constraints are needed. This is related to a well-known optical illusion, relied on in shooting movies such as "Honey, I Shrunk the Kids". Basically, you can't tell the difference between a little thing moving around a little, up close, and a big thing moving around a lot, farther away. You need the two size constraints to set the relative proportions of the foreground (object) and background (camera).
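That ambiguity follows directly from pinhole projection; here is a tiny illustrative sketch (not SynthEyes code):

    def project(x, y, z, focal=1.0):
        # Simple pinhole projection onto the image plane.
        return (focal * x / z, focal * y / z)

    # A small, close point and a point 10x larger and 10x farther away
    # land on exactly the same image coordinates.
    print(project(0.1, 0.05, 1.0))    # (0.1, 0.05)
    print(project(1.0, 0.5, 10.0))    # (0.1, 0.05)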
The related message “Had to add a size constraint, none provided” is informational, and does not indicate a problem.
If you have SynthEyes scenes with multiple cameras linked to one another, you should keep the solver panel's Constrain button turned on to maintain proper common alignment.
See also the Troubleshooting section.
