[Tech] Improving VR may push your PC beyond its limits (part 1)
Original article:
https://www.theregister.co.uk/2018/01/16/human_limits_of_vr_and_ar/
作者:David Matthews, 16 Jan 2018
-----------------------------------------------
Put on a virtual reality headset and it's hard to believe that your visual
system is being stretched beyond its limit. Individual pixels are still
visible, and the narrow field of view makes it feel like you're wearing ski
goggles.
Yet even now VR bombards our visual system with more information than it can
process. Engineers are grappling with how to make headset displays match up
with what we can biologically handle. If they fail, VR could hit a ceiling
where it requires too much computing power to make virtual worlds look
realistic.
One of the key challenges is that most of the pixels currently displayed by
headsets are, in some sense, wasted. To understand why, consider how we see.
Our vision consists of a high-resolution fovea in the centre, surrounded by a
much more blurry periphery, which has evolved to be good at detecting motion
but far worse at fine detail and colour.
"All of our fine resolution takes place in the central one degree," says Tim
Meese, professor of vision science at Aston University. Graphics card maker
Nvidia has calculated that around 96 per cent of the pixels in a VR headset
are viewed in our periphery, rather than the central fovea.
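Nvidia's 96 per cent figure is easy to sanity-check with a back-of-the-envelope calculation. The numbers below are assumptions for illustration, not figures from the article: a roughly 100-degree circular field of view, and a generous 10-degree foveal radius around the gaze point.

```python
# Assumed numbers, not from the article: a headset with a roughly
# 100-degree circular field of view, and a "foveal" region about
# 10 degrees in radius around the gaze point (the 1-degree fovea
# plus margin for the para-fovea and eye-tracking error).
fov_radius_deg = 50.0
foveal_radius_deg = 10.0

# With pixels spread roughly uniformly over the view, the share that
# lands inside the foveal circle scales with the ratio of the areas.
foveal_share = (foveal_radius_deg / fov_radius_deg) ** 2
peripheral_share = 1.0 - foveal_share

print(f"foveal: {foveal_share:.0%}, peripheral: {peripheral_share:.0%}")
```

With these assumed numbers the peripheral share comes out at 96 per cent, in line with Nvidia's estimate.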
Foveated rendering
The technological race is therefore on to develop "foveated rendering":
headsets that display a small, high-resolution spot that follows where we are
looking, but a steadily lower quality image in our periphery, in order to
save on processing power.
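In code, foveated rendering amounts to choosing a shading rate for each pixel from its angular distance to the gaze point. A minimal sketch: the thresholds and the pixels-per-degree figure here are invented for illustration, and real headsets drive this through vendor-specific variable-rate-shading hardware rather than per-pixel Python.

```python
import math

def shading_rate(pixel_xy, gaze_xy, ppd=15.0):
    """Pick a resolution level for one pixel from its angular distance
    to the gaze point (eccentricity). Thresholds and the
    pixels-per-degree value are illustrative assumptions."""
    eccentricity_deg = math.hypot(pixel_xy[0] - gaze_xy[0],
                                  pixel_xy[1] - gaze_xy[1]) / ppd
    if eccentricity_deg < 5:
        return 1   # full resolution: shade every pixel
    if eccentricity_deg < 20:
        return 2   # half resolution: one shade per 2x2 block
    return 4       # quarter resolution: one shade per 4x4 block

gaze = (960, 540)                       # eye tracker reports screen centre
print(shading_rate((960, 540), gaze))   # foveal -> 1
print(shading_rate((1500, 540), gaze))  # far periphery -> 4
```

The saving comes from the quadratic falloff: a 4x4 block in the periphery costs one shader invocation instead of sixteen.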
There's a parallel between foveated rendering and how evolution has honed our
visual system. Both processes are about "where best should we put the effort"
of visualising the world, argues Meese.
By some estimates, given a field of view of 180 degrees, around 74 gigabytes
of visual data are available to us each second, but only around 125 megabytes
are ultimately processed, he explains, "so a lot less" than we could
theoretically take in.
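Meese's estimates imply a striking compression ratio, worth working out explicitly:

```python
# Meese's estimates: with a 180-degree field of view, about 74 GB of
# visual data per second is available, but only about 125 MB of it
# is ultimately processed.
available_bytes_per_s = 74e9
processed_bytes_per_s = 125e6

ratio = available_bytes_per_s / processed_bytes_per_s
print(f"only about 1 byte in {ratio:.0f} is ultimately processed")
```

Roughly one part in six hundred survives, which is the headroom foveated rendering is trying to exploit on the display side.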
Our peripheral vision can spot movement – useful when there is a lion
sneaking up in the bushes, for example – which allows us to then use our
fine-detail fovea to see if there really is a predator about to pounce, Meese
says. But if our entire visual field had the same acuity as our fovea, we'd
need an optic nerve and brain perhaps a hundred times bigger to process all
the information.
Yet certain quirks of our visual system make foveated rendering more complex
than it might seem. For a start, simply blurring the periphery in a headset
has made some users experience tunnel vision. Nvidia shed some light on why:
our ability to detect the presence of contrast – the difference between
light and dark objects – in our peripheral vision peters out more slowly
than our ability to resolve details.
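Nvidia's observation suggests that peripheral filtering should reduce detail while keeping light/dark contrast intact. A toy 1-D sketch of that idea, not Nvidia's published method: blur the signal, then push samples back away from the mean so the overall contrast survives.

```python
def box_blur(signal, radius):
    """Moving-average blur with a clamped window."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def contrast_preserving_blur(signal, radius, boost=1.5):
    """Blur, then push each sample away from the overall mean, so the
    light/dark differences the periphery relies on survive the blur."""
    blurred = box_blur(signal, radius)
    mean = sum(blurred) / len(blurred)
    return [mean + boost * (v - mean) for v in blurred]

def swing(xs):
    """Peak-to-peak contrast of a signal segment."""
    return max(xs) - min(xs)

# A square wave standing in for alternating light and dark objects.
stripes = [0.0] * 4 + [1.0] * 4 + [0.0] * 4 + [1.0] * 4
plain = box_blur(stripes, 2)
kept = contrast_preserving_blur(stripes, 2)

# Away from the array ends, the plain blur loses much of the
# light/dark swing; the contrast-boosted version restores it.
interior = slice(3, 13)
print(swing(plain[interior]), swing(kept[interior]))
```

Both outputs lose the fine edges, but the boosted version keeps more of the contrast range, which is what stops the periphery from reading as an empty tunnel.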
"If you over-blur the periphery, your visual system doesn't have anything to
latch on to out there, and it makes you feel nauseated and dizzy," explains
Dave Luebke, vice president of graphics research at the GPU giant.
We are also surprisingly good at picking out human faces in our peripheral
vision, meaning they might have to be rendered in more detail than other
objects. "There is indeed evidence that you can sense the average expression
of faces in the crowd without ever looking at them directly," he says.
Foveated rendering also requires headsets that can track where you are
looking, something seemingly not lost on the big tech companies. In June last
year, reports surfaced that Apple had bought SensoMotoric Instruments, a
company based near Berlin which has demonstrated foveated rendering in
current-generation headsets.
-----------------------------------------------
That's the first half translated. If there are any terminology errors or awkward phrasings, please point them out, thanks.
--
※ Posted from: PTT (ptt.cc), from: 68.146.73.68
※ Article URL: https://www.ptt.cc/bbs/VR/M.1516485111.A.DCA.html