A word from the system designer: You don't need rote memorization to learn English. You speak Chinese — did you ever memorize Chinese? English works the same way: it's something you absorb naturally, and memorizing your way through it is bound to fail.

The best teacher is an English-speaking parent who talks to you constantly and patiently; given enough time, you simply grow into the language.

But what if we have no English-speaking parents? Can we pick up English just by watching videos? No — that alone doesn't work.

Growing up, most of us have watched plenty of English-language movies — easily one or two hundred. So why, after all those movies, is our English still poor?

The main reason is that we never reinforced the fundamentals. Consider how Western children learn: an infant listens to adults speaking English for at least two years before slowly beginning to speak and interact with them.

And everything is spoken and taught slowly, word by word, step by step — a gradual process of growing more accustomed, understanding more, and speaking more.

The videos and movies we watch, by contrast, feature adults speaking at full speed, with mature phrasing and intonation. How could we possibly keep up?

Once you can't keep up, thoughts like "this is so hard" and "I can't do this" set in, and that latent psychological resistance makes the language even harder to learn.


So we must lay a solid, deep foundation. As the saying goes, every skyscraper rises from level ground; only with a deep foundation does the road ahead get smoother and smoother.

There is a saying in software: the more technical debt you pile up, the more likely you are to abandon the project. Learning English is the same. Pile up vocabulary debt, jump straight to grammar, sentence patterns, and intonation, and you will end up giving up too.

Based on that thinking, I built this system. It starts from the most basic vocabulary and keeps reinforcing and deepening your ear for each word through all kinds of videos, all kinds of speakers, and all kinds of scenes.

To train your ear for a single word, countless people say that same word to you — 20, 50, 100 times, with no clip repeated. I refuse to believe it can't be learned that way.
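To make that mechanism concrete, here is a minimal sketch of the core idea in Python. Everything in it — the Clip record, the library shape, the drill function — is a hypothetical illustration of the design described above, not the system's actual code: for a target word, sample many distinct occurrences from different videos and speakers, and play them back without repeats.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    """One recorded occurrence of a word in some video (hypothetical record)."""
    video_id: str
    speaker: str
    start_sec: float  # where in the video the word is spoken

def drill(word: str, library: dict[str, list[Clip]], repetitions: int = 20) -> list[Clip]:
    """Pick up to `repetitions` distinct clips of `word`, never repeating a clip.

    Drawing from different videos and speakers varies the speed, accent, and
    context, which is the "many repetitions, no repeats" idea described above.
    """
    clips = library.get(word, [])
    if not clips:
        raise ValueError(f"no clips recorded for {word!r}")
    # Sample without replacement so every playback is a fresh occurrence.
    return random.sample(clips, k=min(repetitions, len(clips)))

if __name__ == "__main__":
    # A tiny hypothetical library mapping each word to its known occurrences.
    library = {
        "compiler": [
            Clip("xla-dev-summit-2017", "speaker-a", 42.0),
            Clip("xla-dev-summit-2017", "speaker-a", 310.5),
            Clip("some-other-talk", "speaker-b", 12.3),
        ],
    }
    for clip in drill("compiler", library, repetitions=3):
        print(f"play {clip.video_id} at {clip.start_sec}s ({clip.speaker})")
```

Sampling without replacement is what gives the "no repeats" property; a real implementation would also need the transcript-alignment step that finds each word's timestamp in the first place.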


After building the system and using it myself, I found I really could pick out the word being said — no matter how fast the speaker talks.

If you can hear the word, you can catch the gist of what's said; if you can catch the gist, you can answer with a few simple words; if you can answer, you can converse; and once you can converse, you can go back and polish grammar, sentence patterns, and intonation.


My goal is simply to understand what people are saying and to hold a conversation in English — let's all keep working and learning together. If you have other ideas or suggestions for the system's design, you can reach me at ichich2013@gmail.com

XLA: TensorFlow, Compiled! (TensorFlow Dev Summit 2017)
Views: 685
tensorflow
extensible
we've
interpretive
it's
tensorflow
dataflow
nodes
executes
that's
tensorflow
it's
owed
expressiveness
you're
constrained
you're
programming
environments
it's
modular
flow
tensor
flow
exemplified
we're
compilation
xla
compiler
we've
you'll
tensorflow
xla
that's
we're
pasting
floating-point
explicitly
cpu
tensorflow
underscore
gpu
boilerplate
it's
launching
threads
launching
gpu
threads
runtime
tensor
flow
gpu
tensorflow
compilation
there's
i've
just-in-time
compiler
what's
just-in-time
runtime
tensor
flow
it's
defined
tensorflow
gist
just-in-time
compilation
xla
slots
tensor
flow
ecosystem
capabilities
we'll
clusters
explicitly
devices
complementary
compilation
flip
tensorflow
handles
comp
compiled
targets
devices
jit
compilation
specialization
we've
benchmark
wins
micro
benchmarks
in-house
syntax
latency
reductions
200
microseconds
5
microseconds
latency
compiling
just-in-time
compilation
tensorflow
runtime
powerpc
x86
cpu
26
megabytes
600
we'll
optimizer
that's
reusable
toolkit
optimizations
devices
we've
parameterised
optimization
toolkit
you're
there's
optimization
optimizations
toolkit
prioritization
workload
recap
compilation
we're
compiler
dispatcher
we're
we're
compile-time
unroll
vectorize
optimizations
today's
specializing
we're
optimizations
cpus
gpus
fundamentally
decomposed
tensor
flow
softmax
that's
hand-coded
c++
softmax
dots
adds
reduces
exps
decomposition
macro
fused
optimized
expressible
tensor
flow
tensor
flow
we're
combinatorial
primitives
macro
back-end
cpu
gpu
llvm
runtime
plugin
gpu
backends
effectively
jit
compilation
you're
prototyping
compilation
you're
compiling
you're
compilation
cache
here's
optimized
embedded
devices
latency
embedded
devices
flow
ecosystem
tensorflow
integrates
directories
there's
directories
i'm
unsurprisingly
compiler
it's
graphs
lowering
graphs
representations
compiler
toolkit
it's
compiler
it's
subdirectory
tf2
xla
tf2
unsurprisingly
it's
tensorflow
we've
runtime
executes
there's
components
i've
labeled
notes
softmax
softmax
that's
tensorflow
tensorflow
add-in
softmax
tensorflow
runtime
we've
swapped
tensorflow
theme
slides
swapped
tensorflow
kernels
swapped
kernels
executing
softmax
exponentiation
tensor
flow
graphs
kernels
they're
haven't
tensorflow
op
that's
haven't
compiler
tensor
flow
just-in-time
compilation
transformations
i'm
we've
trapezoid
we've
compiled
nodes
we've
clustering
tensorflow
runtime
it's
simplified
one's
trapezoid
clustering
clusters
compiled
bubbles
attentively
non-trivial
it's
clustering
right-hand
we've
nodes
jit'ed
nodes
jit
compilation
we've
conceptually
you're
you've
clusters
graphs
jit
compilation
one's
explicitly
scopes
tensor
flow
and
just-in-time
compilation
couldn't
compiled
that's
we'll
we'll
we'll
inputs
outputs
c++
header
conceptually
arguments
fetches
arguments
that's
that's
reducing
binary
sizes
computations
tensorflow
extraordinarily
inputs
outputs
binaries
currently
haven't
that's
header
there's
conceptually
couldn't
protobuf
you're
we're
identifying
inputs
re-emphasize
compilers
it's
sizes
we're
similarly
bazel
bazel
we've
here's
here's
fetches
computation
computation
we're
matrix
multiplication
that's
invoking
compiler
subdirectory
tensorflow
compiler
tf
tex
describing
what's
tensor
flow
we've
gnmt
that's
neural
we've
alexnet-like
automatically
fused
computation
streaming
parallelization
highlighting
examples
condemning
you'd
benchmarks
we've
micro
benchmarks
we're
implementations
fused
kernels
tensorflow
tensorflow
wap
slowdowns
redress
prioritize
we're
tensor
currently
tensorflow
op
parallelism
backends
sequential
megabytes
downloaded
binary
reductions
3d
it's
compiled
android
megabyte
tensor
flow
runtime
bytes
330
kilobytes
88
kilobytes
depending
sizes
implementations
matrix
multiplications
graphs
compiled
xla
compilation
they're
