Deep Residual Learning for Image Recognition
Kaiming He    Xiangyu Zhang    Shaoqing Ren    Jian Sun
Microsoft Research
{kahe, v-xiangz, v-shren, jiansun}@microsoft.com

arXiv:1512.03385v1 [cs.CV] 10 Dec 2015

Abstract

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions¹, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

¹ http://image-net.org/challenges/LSVRC/2015/ and http://mscoco.org/dataset/#detections-challenge2015.

Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus higher test error. Similar phenomena on ImageNet are presented in Fig. 4.

1. Introduction

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit “very deep” [41] models, with a depth of sixteen [41] to thirty [16]. Many other non-trivial visual recognition tasks [8, 12, 7, 32, 27] have also greatly benefited from very deep models.
Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].
When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.
The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).



Figure 2. Residual learning: a building block.

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x) − x. The original mapping is recast into F(x)+x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
The formulation of F(x) + x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2).
Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.
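A minimal sketch of such a building block, assuming a PyTorch-style framework (class and variable names are illustrative, not from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Two stacked 3x3 conv layers plus an identity shortcut: output = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))  # F(x)
        return F.relu(residual + x)  # F(x) + x, second nonlinearity after the addition
```

The addition introduces no extra parameters; if an identity mapping were the optimum, the solver could simply drive the convolution weights toward zero.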
We present comprehensive experiments on ImageNet
[36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not specific to a particular dataset.
We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.
On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

2. Related Work
Residual Representations. In image recognition, VLAD
[18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.
In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used
Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown
[3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.
Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons
(MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an “inception” layer is composed of a shortcut branch and a few deeper branches.
Concurrent with our work, “highway networks” [42, 43] present shortcut connections with gating functions [15].
These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free.
When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).
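To make the contrast concrete, here is a hedged sketch of a highway-style gated shortcut next to a parameter-free identity shortcut (illustrative names, not code from either paper):

```python
import torch
import torch.nn as nn

class HighwayGatedShortcut(nn.Module):
    """Gate T(x) is data-dependent and parameterized; when T(x) -> 0 the shortcut closes."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))      # learned, data-dependent gate
        h = torch.relu(self.transform(x))    # transformed path
        return t * h + (1.0 - t) * x         # gated mixture of transform and input


def identity_shortcut(f_x, x):
    """Residual form: the shortcut is never closed and adds no parameters."""
    return f_x + x
```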


3. Deep Residual Learning
3.1. Residual Learning
Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions², then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function
F(x) := H(x) − x. The original function thus becomes
F(x)+x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.
This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.
In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.

3.2. Identity Mapping by Shortcuts

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

y = F(x, {Wi}) + x.    (1)

Here x and y are the input and output vectors of the layers considered. The function F(x, {Wi}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W2 σ(W1 x), in which σ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).
The shortcut connections in Eqn.(1) introduce neither extra parameters nor computational complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
The dimensions of x and F must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Ws by the shortcut connections to match the dimensions:

y = F(x, {Wi}) + Ws x.    (2)

We can also use a square matrix Ws in Eqn.(2). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Ws is only used when matching dimensions.
The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W1 x + x, for which we have not observed advantages.
We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {Wi}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

² This hypothesis, however, is still an open question. See [28].

3.3. Network Architectures

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.
Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).
It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).
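The two design rules above can be sketched as follows (a hedged PyTorch-style sketch; helper names are illustrative, not the authors' code):

```python
import torch.nn as nn

def conv3x3(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def make_plain_stage(in_ch, out_ch, num_layers, downsample):
    """Rule (i): the same number of filters for the same feature map size.
    Rule (ii): halve the map (stride-2 conv) and double the filters."""
    layers = [conv3x3(in_ch, out_ch, stride=2 if downsample else 1)]
    layers += [conv3x3(out_ch, out_ch) for _ in range(num_layers - 1)]
    return nn.Sequential(*layers)

# e.g., the 34-layer plain net stacks stages of 6, 8, 12, and 6 such conv layers
# (after the initial 7x7 conv) and ends with global average pooling + a 1000-way fc.
```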


Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in
Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
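A hedged sketch of the two shortcut options when the dimensions increase (illustrative code under assumed details, not the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def shortcut_option_a(x, out_channels, stride=2):
    """Option A: identity with stride, padding extra zero channels; no parameters.
    The exact subsampling scheme is an assumption made for illustration."""
    x = x[:, :, ::stride, ::stride]            # spatial subsampling
    pad = out_channels - x.size(1)
    return F.pad(x, (0, 0, 0, 0, 0, pad))      # zero-pad the channel dimension

class ShortcutOptionB(nn.Module):
    """Option B: projection shortcut Ws x, realized by a strided 1x1 convolution (Eqn. (2))."""
    def __init__(self, in_channels, out_channels, stride=2):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        return self.bn(self.proj(x))
```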

Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.

3.4. Implementation

Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60 × 10⁴ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].
In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).
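A minimal sketch of this training setup, assuming PyTorch-style APIs (the paper does not provide solver code; the plateau-based schedule here approximates the manual "divide by 10 when the error plateaus" rule):

```python
import torch

def make_optimizer(model):
    # SGD, mini-batch 256 (set in the data loader), lr 0.1, momentum 0.9, weight decay 1e-4
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=1e-4)
    # divide the learning rate by 10 when the validation error plateaus
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1)
    return optimizer, scheduler

# Batch normalization is applied right after each convolution and before the activation;
# weights are initialized as in [13]; no dropout is used.
```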

4. Experiments

4.1. ImageNet Classification

We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.
The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem: the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.


layer name   output size   building blocks per variant
conv1        112×112       7×7, 64, stride 2 (all variants)
conv2_x      56×56         3×3 max pool, stride 2; then
                           18-layer: [3×3, 64; 3×3, 64] ×2
                           34-layer: [3×3, 64; 3×3, 64] ×3
                           50/101/152-layer: [1×1, 64; 3×3, 64; 1×1, 256] ×3
conv3_x      28×28         18-layer: [3×3, 128; 3×3, 128] ×2
                           34-layer: [3×3, 128; 3×3, 128] ×4
                           50-layer: [1×1, 128; 3×3, 128; 1×1, 512] ×4
                           101-layer: [1×1, 128; 3×3, 128; 1×1, 512] ×4
                           152-layer: [1×1, 128; 3×3, 128; 1×1, 512] ×8
conv4_x      14×14         18-layer: [3×3, 256; 3×3, 256] ×2
                           34-layer: [3×3, 256; 3×3, 256] ×6
                           50-layer: [1×1, 256; 3×3, 256; 1×1, 1024] ×6
                           101-layer: [1×1, 256; 3×3, 256; 1×1, 1024] ×23
                           152-layer: [1×1, 256; 3×3, 256; 1×1, 1024] ×36
conv5_x      7×7           18-layer: [3×3, 512; 3×3, 512] ×2
                           34-layer: [3×3, 512; 3×3, 512] ×3
                           50/101/152-layer: [1×1, 512; 3×3, 512; 1×1, 2048] ×3
             1×1           average pool, 1000-d fc, softmax
FLOPs                      1.8×10⁹ (18) / 3.6×10⁹ (34) / 3.8×10⁹ (50) / 7.6×10⁹ (101) / 11.3×10⁹ (152)

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.
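The variants in Table 1 differ only in the block type and the number of blocks per stage. A short sketch of the depth arithmetic (names are illustrative):

```python
# Block counts per stage (conv2_x .. conv5_x), following Table 1.
RESNET_CONFIGS = {
    18:  ("basic",      [2, 2, 2, 2]),
    34:  ("basic",      [3, 4, 6, 3]),
    50:  ("bottleneck", [3, 4, 6, 3]),
    101: ("bottleneck", [3, 4, 23, 3]),
    152: ("bottleneck", [3, 8, 36, 3]),
}

def count_weighted_layers(depth):
    """Sanity check: conv1 + (2 or 3 convs per block) + the final fc = network depth."""
    block_type, blocks = RESNET_CONFIGS[depth]
    convs_per_block = 2 if block_type == "basic" else 3
    return 1 + convs_per_block * sum(blocks) + 1

assert count_weighted_layers(152) == 152
```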


Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.

             plain    ResNet
18 layers    27.94    27.88
34 layers    28.54    25.03

Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which hinder the reduction of the training error³. The reason for such optimization difficulties will be studied in the future.

³ We have experimented with more training iterations (3×) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.
We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.
Second, compared to its plain counterpart, the 34-layer

ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.
Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.

model             top-1 err.   top-5 err.
VGG-16 [41]       28.07        9.33
GoogLeNet [44]    -            9.15
PReLU-net [13]    24.27        7.38
plain-34          28.54        10.02
ResNet-34 A       25.03        7.76
ResNet-34 B       24.52        7.46
ResNet-34 C       24.19        7.40
ResNet-50         22.85        6.71
ResNet-101        21.75        6.05
ResNet-152        21.43        5.71

Table 3. Error rates (%, 10-crop testing) on ImageNet validation. VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.

method                        top-1 err.   top-5 err.
VGG [41] (ILSVRC'14)          -            8.43†
GoogLeNet [44] (ILSVRC'14)    -            7.89
VGG [41] (v5)                 24.4         7.1
PReLU-net [13]                21.59        5.71
BN-inception [16]             21.99        5.81
ResNet-34 B                   21.84        5.71
ResNet-34 C                   21.53        5.60
ResNet-50                     20.74        5.25
ResNet-101                    19.87        4.60
ResNet-152                    19.38        4.49

Table 4. Error rates (%) of single-model results on the ImageNet validation set (except † reported on the test set).

method                        top-5 err. (test)
VGG [41] (ILSVRC'14)          7.32
GoogLeNet [44] (ILSVRC'14)    6.66
VGG [41] (v5)                 6.8
PReLU-net [13]                4.94
BN-inception [16]             4.82
ResNet (ILSVRC'15)            3.57

Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.

Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet-34. Right: a “bottleneck” building block for ResNet-50/101/152.

Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.
Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.
Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design⁴. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.
The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.
50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.

⁴ Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.
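A hedged sketch of the bottleneck block described above (PyTorch-style; class and parameter names are illustrative):

```python
import torch.nn as nn
import torch.nn.functional as F

class BottleneckBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 restore, with a parameter-free identity shortcut."""
    def __init__(self, channels, bottleneck_channels):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck_channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(bottleneck_channels)
        self.conv = nn.Conv2d(bottleneck_channels, bottleneck_channels, kernel_size=3,
                              padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(bottleneck_channels)
        self.restore = nn.Conv2d(bottleneck_channels, channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.reduce(x)))
        out = F.relu(self.bn2(self.conv(out)))
        out = self.bn3(self.restore(out))
        return F.relu(out + x)  # identity shortcut connects the two high-dimensional ends

# e.g., the ResNet-50 conv2_x blocks use channels=256 and bottleneck_channels=64 (Table 1).
```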

101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).
The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).
Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.

4.2. CIFAR-10 and Analysis

We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.
The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:

output map size   32×32   16×16   8×8
# layers          1+2n    2n      2n
# filters         16      32      64

When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [13] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.
We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [42]), suggesting that such an optimization difficulty is a fundamental problem.
Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.
We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging⁵. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin

method             # layers   # params   error (%)
Maxout [10]        -          -          9.38
NIN [25]           -          -          8.81
DSN [24]           -          -          8.22
FitNet [35]        19         2.5M       8.39
Highway [42, 43]   19         2.3M       7.54 (7.72±0.16)
Highway [42, 43]   32         1.25M      8.80
ResNet             20         0.27M      8.75
ResNet             32         0.46M      7.51
ResNet             44         0.66M      7.17
ResNet             56         0.85M      6.97
ResNet             110        1.7M       6.43 (6.61±0.16)
ResNet             1202       19.4M      7.93

Table 6. Classification error on the CIFAR-10 test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show “best (mean±std)” as in [43].

⁵ With an initial learning rate of 0.1, it starts converging (
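A hedged sketch of the 6n+2 construction and the training schedule described above (PyTorch-style; class and helper names are illustrative, and the residual shortcuts are omitted for brevity):

```python
import torch
import torch.nn as nn

def conv_pair(in_ch, out_ch, stride=1):
    # a pair of 3x3 conv layers; the shortcut connection is omitted in this sketch
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

def cifar_plain_net(n):
    """6n+2 weighted layers: 1 + 2n layers at 32x32 (16 filters),
    2n at 16x16 (32 filters), 2n at 8x8 (64 filters), plus a 10-way fc."""
    stages = [nn.Conv2d(3, 16, 3, padding=1, bias=False),
              nn.BatchNorm2d(16), nn.ReLU(inplace=True)]
    for in_ch, out_ch in [(16, 16), (16, 32), (32, 64)]:
        stages.append(conv_pair(in_ch, out_ch, stride=1 if in_ch == out_ch else 2))
        stages.extend(conv_pair(out_ch, out_ch) for _ in range(n - 1))
    stages += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10)]
    return nn.Sequential(*stages)

model = cifar_plain_net(n=3)   # 20 layers; n in {3, 5, 7, 9} gives 20/32/44/56 layers
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# divide the learning rate by 10 at 32k and 48k iterations, terminate at 64k
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32000, 48000], gamma=0.1)
```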
