Sunday, 7 April 2013

[China-Ireland News] The Week in Numbers: Syria, the exchequer deficit, passenger numbers, congestion...

Each week we pick out a handful of telling numbers and statistics to help readers review and digest the week's main news.


26 – the number of Irish fighters taking part in the war in Syria. According to UK figures, this makes Ireland the country with the highest per-capita participation outside the Arab world.

€63 – the basic weekly living cost claimed by a British minister, a remark made in response to public criticism of the UK government's cuts to social welfare.

3/5 – the proportion of special needs assistants who have been physically assaulted by students in their care.

11,682 – the number of cars sold in Ireland in March, down 14 per cent on the same month last year.

€3.7 billion – the size of Ireland's exchequer deficit for the first quarter of this year, €568 million lower than in the same period last year.

€4.81 million – the total spent by Irish Government departments in 2012 on official transport and related pay. Nearly half of the spending was administered by the Department of Justice, mainly on state cars for the Taoiseach, the Minister for Finance, the President, the Chief Justice and the Director of Public Prosecutions.

42,000 – the number of serving teachers facing a new round of Garda vetting, a figure arising from new legislation on the vetting of people who work with vulnerable groups.

23 – the number of primary schools whose patronage may change next year, a figure based on surveys of 38 Irish towns.

Nearly 8,000 – the number of mobile phone thefts reported to Gardaí in the first half of 2012. As more people become willing to report thefts to the Gardaí, this figure is set to grow.

3.9 million – the passenger traffic through Ireland's airports in the first quarter of this year.

37 minutes – the extra driving time at the height of Dublin's rush hour. According to one assessment, Dublin ranks 11th among Europe's most congested cities, a marked fall from its 2011 position.

Keywords: the week; numbers


Friday, 5 April 2013

[China-Ireland News] Central Bank cuts 2013 growth forecast on weak exports

<The Central Bank says slow progress in tackling bad loans is prolonging uncertainty around the financial sector>

With external demand for exports weak, the Central Bank has lowered its forecast for this year's economic growth. It now expects growth of 1.2% in 2013, and projects 2.5% for next year. In its latest quarterly bulletin, the bank said employment would grow slightly this year, and that overall unemployment should fall below 14% by 2014.

Responding to criticism of the Irish economy made by the IMF (International Monetary Fund) earlier this week, the bank also said that the slow progress of Irish banks in tackling bad loans was prolonging the economic uncertainty surrounding the financial sector, which it believes has delayed durable solutions for distressed borrowers.

The bank also warned the Government to implement this year's budget plan vigorously. The bulletin said: "Full implementation of the announced budgetary measures remains essential to maintaining market confidence and to cushioning the economy against negative shocks."

The bank added that Ireland's competitiveness is moving in the right direction, but that further improvement is needed to return the economy as a whole to steady growth.

Keywords: Central Bank; economic growth; exports


Thursday, 4 April 2013

[China-Ireland News] IMO votes unanimously to reject Croke Park II

The Irish Medical Organisation (IMO) says its members will not be swayed by the Irish Congress of Trade Unions into voting in favour of the Croke Park II proposals.
<IMO members vote against the proposals at today's meeting>

The IMO set aside the scheduled business of its annual conference today to hold an emergency debate on the Croke Park II proposals, and the delegates appeared to have agreed well in advance to say "no" to the new deal.

At the opening of the three-day conference, the IMO said its members would not be influenced by the Irish Congress of Trade Unions into accepting the Croke Park II proposals, "because the deal would sharply cut members' pay and lengthen their working hours." The IMO council has said that if ICTU's public services committee ratifies Croke Park II later this month, it will likewise consider its own options. "We need to show the Government that our members cannot take on any more work," the IMO added.

The IMO, together with the Irish Nurses and Midwives' Organisation, the Civil Public and Services Union and UNITE, walked out of the Croke Park II talks on Saturday 24 February, arguing that beyond lengthening working hours, harming patient care and cutting public sector pay, the proposals would do nothing to solve the fiscal problem.

The IMO will now return to its scheduled agenda, which includes abortion policy and the pension of its former chief executive.

Keywords: Irish Medical Organisation; Croke Park II; rejection

[China-Ireland News] Abortion and pay cuts top the agenda at medical conference


<The IMO opposes the Croke Park proposals>

The three-day annual conference of the Irish Medical Organisation (IMO) opens later today in Killarney.

With the organisation in the midst of an intensive internal review, this year's conference will differ from those of previous years. Delegates will debate the fallout from the controversy over the €9.7 million retirement package of former chief executive George McNeice.

Two motions at this year's conference are attracting particular attention. One concerns abortion policy: it supports relaxing the rules so that abortion services may be provided (1) where there is a real and substantial risk to the life of the mother; (2) where the pregnancy results from a criminal act; and (3) where the foetus has an abnormality that cannot be remedied.

The other concerns doctors' pay. Non-consultant hospital doctors will highlight the dangers of the excessive working hours proposed under Croke Park Agreement II, while GPs will call on Minister for Health James Reilly to oppose plans for further cuts to the fees paid for treating medical card patients.

The IMO's outright opposition to Croke Park II means it may face government legislation to impose the new terms, and that in turn could provoke industrial action across the health sector.

Keywords: Irish Medical Organisation (IMO); doctors; GPs; health


Source: http://www.rte.ie/news/2013/0404/379657-imo/

Wednesday, 3 April 2013

[China-Ireland News] Department of Education proposes bill for fairer school admissions


<Under the Department's bill, parents will no longer be able to secure a school place for their children simply by paying a fee>

Minister for Education Ruairi Quinn will publish draft legislation in the coming months aimed at making school enrolment policies fairer. The draft seeks to make admissions policies fairer and more reasonable, particularly for students who have newly moved into an area and for other special groups, such as students from the Traveller community.

The bill comes almost two years after the Department published a discussion paper on school enrolment policies. It proposes measures to make the admissions system fairer to all students, including abolishing the priority currently given to the children of past pupils or of staff.

Mr Quinn said yesterday that he would bring the bill to Cabinet shortly. Under the legislation, parents will no longer be able to secure a place for their children simply by paying a fee. The bill will also outlaw the practice of interviewing parents and students before offering a place. Mr Quinn added that the bill is likely to be published in the coming months for public comment and consultation.

Keywords: education; enrolment; Ruairi Quinn


Source: http://www.rte.ie/news/2013/0403/379503-education-school-enrolment/

Tuesday, 2 April 2013

[China-Ireland News] Eurostat: Irish youth unemployment rose in February


<Youth unemployment in Ireland reached 30.8% in February>

Figures from Eurostat show that the number of unemployed young people in Ireland rose in February.

Unemployment among under-25s in Ireland stood at roughly 30.8% in February, up 0.4 percentage points on January but down on the same period in 2012. Meanwhile, youth unemployment in the euro zone was 23.9% for the month, down 0.1 percentage points on the previous month, while the figure for the wider European Union was 23.5%.

In absolute numbers, nearly 3.6 million young people in the euro zone were unemployed during the month, 188,000 more than a year earlier. Eurostat put total euro-zone unemployment in February at 19.07 million, a rate of 12%. That is 1.1 percentage points higher than a year earlier, but only marginally up on the previous month. Ireland's overall unemployment rate for the month was 14.2%, unchanged from the previous two months; the figure for February last year was 15.1%.

Eurostat said that over the past year the unemployment rate rose in 19 member states and fell in eight. Spain's rate climbed to 26.3% in February, 2.4 percentage points higher than a year earlier, while Cyprus's rate rose by 3.8 percentage points to 14%.

Keywords: unemployment; EU; youth



Sunday, 24 March 2013

Some Enjoyable Ways to Improve Your English


  • Going to the pub/bar
A pub or bar (I am not entirely clear on the difference) is a quite informal, relaxed social venue. You can get good practice for your English when you are there. This is because
    • You have more people to talk with. Generally you go to the pub with a number of friends, colleagues or classmates. People you have little chance to talk with, or talk much with, at work or in class can chat with you at length there. The pub is exactly the place to get to know more people, and almost everyone there is open to talking.
    • You worry less about your mistakes and difficulties of expression. As this is a very informal and relaxed setting, you will not be anxious if you cannot express your meaning correctly. You can make yourself understood through body language, by showing pictures on your mobile phone, and so on. You generally chat with only one or two people, so you have time to "work together" to make clear what you want to talk about.
    • You are braver about speaking out. This is partly because you have less anxiety (see the previous point), and partly due to the stimulation of the alcohol and the atmosphere. Beer or wine and the lively environment of the pub make you keener to talk.
    • You need to speak loudly. A pub is a noisy place, and you need to "shout" when you talk with someone. This is good practice for your pronunciation and also gives you more self-confidence.
  • Finding a native speaker mate
This gives you motivation to improve your English in many respects.
    • To attract someone's attention, you need to be able to express your ideas properly, and perhaps charmingly as well.
    • To date and flirt with someone, you need to be able to understand her/him clearly, and also to express yourself clearly. You can learn from her/him as you chat, too.
    • To find common topics between you and her/him, you probably need to know more about her/his daily life: particular bands, stars and movies, for example, and also the news.
    • You will probably need to write to her/him. This encourages you to improve your writing.
    • To get along well with her/him over a long time, you probably need to learn about and understand the culture she/he is familiar with. That means reading some background material.


Friday, 22 February 2013

What to write in a birthday card

  1. Happy, happy, happy birthday!! Don't forget you're a year older.
  2. Thanks for inviting us all to come celebrate your birthday!! Hope the cake is good :) All the best.
  3. Wishing you a very Happy Birthday!! We hope you have a great day and all your wishes come true.
  4. Wishing you a very Happy Birthday!! We hope you have a wonderful day and get spoilt with gifts!
  5. Happy Birthday!! We hope all your dreams and wishes come true
  6. Happy Birthday!! Wishing you all the best for today and in the future. Now let's PARTY!!!
  7. Better late than never so HAPPY BIRTHDAY!! Wishing you all the best my friend - All the best.
  8. Happy birthday old [man/lady]!
  9. Happy birthday you oldie but goodie!! We hope all your wishes come true.
  10. Wishing you a happy [X]th birthday. All the best.
  11. HAPPY BIRTHDAY!!! Wishing you a great year ahead.
  12. Happy Birthday!! You're now older and hopefully wiser - Have a great day.
  13. Card messages aren't my thing - Happy Birthday!
  14. Happy birthday [name]!! We hope you have a great day and get spoilt rotten! Love you lots.
  15. Wishing you a great birthday!! All the best and we hope you get lots of presents!! All the best.
  16. Happy birthday to my best friend!! Wishing you all the best and we hope you have a great year ahead
  17. Wishing you all the best on your birthday - We hope you get spoiled with lots of presents!! All the best.
  18. Happy Birthday!! Tonight will be a big one - All the best.
  19. Wishing you a great day, year, century (just joking) - HAPPY BIRTHDAY!! Hope you have a great day.
  20. Happy birthday oldie!! We'll be pumping the music up tonight just so you can hear it.
  21. Happy birthday!! Wishing you all the best on your special day.

Source: http://www.greeting-card-messages.com/what-to-write-in-a-birthday-card.php

Monday, 18 February 2013

New Economic Thinking

New economic thinking, for me, involves the study of economic phenomena from a perspective that sees economic systems as non-linear and dynamic. This approach is new because it models the interactions among agents in a more complex and realistic way than much of standard economics does. The complexity approach enables us to gain an alternative understanding of how aggregate-level properties emerge from micro-level behaviours.

In my own research, on cooperation in agricultural collectives, researchers have argued that households in collectives tend to shirk collectively because shirking is a rational choice for each individual household, and mutual shirking (i.e. non-cooperation) therefore results in a Nash equilibrium. This logic is widely used to explain the failure of agricultural collectivization. However, it fails to explain the existence of successful agricultural collectives, of inefficient collectives (e.g. the People's Communes in China) that were nonetheless sustained for long periods of time, and of private farming (e.g. China's household responsibility system) emerging out of strict collectives. I believe this is because the standard economics approach to modelling agricultural collectives cannot capture all the non-linear dynamics found in real collectives. A complexity approach makes it possible to model the complex interactions between households, and between households and government. It also makes it possible to include aggregate-level features, such as the social cognition and trust that emerge from long-term interaction within collectives, which can feed back into individual-level decision making. These interactions shape and reshape the way households behave in various ways and at different times, and being able to include them in economic models will better enable us to understand economic phenomena.

To model economic phenomena as complex, non-linear systems, one can use agent-based simulation, which is a more flexible means of modelling than equation-based modelling. Using agent-based models it is possible to create heterogeneous agents (e.g. households, collectives) that have multiple attributes (e.g. marginal productivity of effort) and preferences (e.g. preference for risk), as well as to conduct bottom-up analysis, test deviations from rational choice theory, and include ideas from across the social sciences.
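To make this concrete, here is a minimal, illustrative agent-based sketch in Python of households in a collective choosing between working and shirking. Everything in it (the parameter values, the equal-sharing rule, the noisy "conditional cooperation" term standing in for trust and social pressure) is an assumption invented for this example, not a model taken from the research described above:

import random

# Toy collective: each round every household decides whether to work or
# shirk. Output is shared equally, so the private return to working
# (PRODUCTIVITY / N_HOUSEHOLDS) is far below the private cost of effort
# (EFFORT_COST), making shirking the individually rational choice.
N_HOUSEHOLDS = 20
PRODUCTIVITY = 1.0        # output one working household adds to the pool
EFFORT_COST = 0.6         # private cost of working
SOCIAL_WEIGHT = 0.5       # how strongly households mirror their peers
ROUNDS = 30

working = [True] * N_HOUSEHOLDS    # the collective starts fully cooperative

for t in range(ROUNDS):
    peer_share = sum(working) / N_HOUSEHOLDS
    new_working = []
    for _ in range(N_HOUSEHOLDS):
        material = PRODUCTIVITY / N_HOUSEHOLDS - EFFORT_COST   # negative here
        social = SOCIAL_WEIGHT * peer_share                    # pull toward peers
        noise = random.gauss(0.0, 0.1)                         # idiosyncratic shocks
        new_working.append(material + social + noise > 0.0)
    working = new_working
    print(t, sum(working))    # cooperation decays toward mutual shirking

Even this toy version reproduces the unravelling described above: because the material payoff to working is negative, cooperation collapses within a few rounds. Richer versions of the model can make the trust and social-pressure terms emerge endogenously rather than being imposed as constants.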

Complexity economics can improve economic thinking in a number of respects. The demand for new economic thinking comes from a number of arenas.
  1. The objects of economic research, economic phenomena, are complex, and that complexity keeps growing as communication and interaction among economic actors increase. This requires economists to improve the way they deal with the complexity of economic systems.
  2. The public, as the final consumers of economic analysis, have been let down by poor economic predictions; since the 2008 financial crisis, much of the public has had little faith in the ability of economists to provide accurate information about the economy. Taking the complexity of economic systems into consideration is necessary if economists are to improve their work and regain that trust.
  3. Complexity economics offers a new paradigm for examining economic phenomena. This paradigm, unlike the reductionism that standard economics applies, emphasizes the non-linear dynamics of economic systems, and as a consequence deals with economic phenomena in a more realistic way. Combined with modern (computational) analytical tools, complexity economics can be expected to compensate for several shortcomings of standard economic research, both theoretical and methodological.
It is worth mentioning that I believe complexity economics is complementary to, rather than a substitute for, standard economics. Each has its own strengths and weaknesses. For example, complexity economics can treat phenomena more realistically, but it offers no settled modelling rules to follow, which can confuse researchers. Standard economics can present its ideas through clear logical deduction (with the aid of mathematical formulas), but it relies heavily on strong assumptions, which undermines its realism. It is therefore best for the two to cooperate rather than compete. This is especially essential for complexity economics, which is still coming into maturity.

Friday, 15 February 2013

Reinforcement Learning Overview

I. What is RL (Reinforcement Learning)?

One important branch of computer science is AI (Artificial Intelligence). Machine learning is a subfield of AI that has become a hot research area in recent years.

Machine learning in general can be classified into three categories:
1) Supervised learning (SL): learning in which you know the input as well as the desired output.
2) Unsupervised learning (USL): learning in which you know the input but not the output.
3) Reinforcement learning (RL): learning that falls between the first two categories, in which you have the input but not the output; instead you have a "critic" that tells you whether the learner's output is correct or wrong. RL is typically the problem of an agent acting in an environment, trying to achieve the best performance over time by trial and error.

In the standard model of RL [3, p368], the agent in the environment chooses an action a_i, obtains a reward r_i, and moves from state s_i to state s_{i+1}. The goal is to maximize the long-term reward, where γ is called the discounting factor.
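As a concrete statement of that goal (the standard formulation, written out here since the model figure is not reproduced), the agent maximizes the expected discounted return

    E[ Σ_{t=0}^{∞} γ^t r_t ],    0 ≤ γ < 1,

so a reward received t steps in the future is weighted down by γ^t.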

II. Central features and issues of RL

RL dates back to the 1960s, originating in research on dynamic programming and Markov decision processes. The Monte Carlo method, which learns from stochastic sampling of the sample space, is another source of RL methods. A third family of methods specific to RL is the temporal difference method (TD(λ)), which combines the merits of dynamic programming and the Monte Carlo method, and was developed in the 1980s largely through the work of Sutton and Barto.

The simplest RL problem is the bandit problem. One important RL algorithm is the Q-learning algorithm, introduced in the next section.
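A minimal epsilon-greedy sketch in Python illustrates the bandit setting; the hidden lever payoffs and the value of epsilon below are made-up illustrative choices, not values from the literature:

import random

# Epsilon-greedy bandit sketch. The payoff probabilities and EPSILON
# are invented for illustration.
TRUE_PAYOFFS = [0.2, 0.5, 0.8]          # hidden mean reward of each lever
EPSILON = 0.1                            # fraction of pulls spent exploring

estimates = [0.0] * len(TRUE_PAYOFFS)    # running mean reward per lever
counts = [0] * len(TRUE_PAYOFFS)

def pull(lever):
    """Simulate one pull: Bernoulli reward with the lever's hidden mean."""
    return 1.0 if random.random() < TRUE_PAYOFFS[lever] else 0.0

for _ in range(1000):
    if random.random() < EPSILON:        # explore: try a random lever
        lever = random.randrange(len(TRUE_PAYOFFS))
    else:                                # exploit: pull the best-looking lever
        lever = max(range(len(estimates)), key=lambda i: estimates[i])
    r = pull(lever)
    counts[lever] += 1
    estimates[lever] += (r - estimates[lever]) / counts[lever]   # incremental mean

print(estimates)   # approaches TRUE_PAYOFFS for well-explored levers

With enough pulls the greedy choice settles on the best lever, while the epsilon fraction of random pulls keeps the estimates of the other levers from going stale.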

Models of optimal behavior in RL can be classified into 1) the finite-horizon model, 2) the infinite-horizon discounted model, and 3) the average-reward model. [1]

To measure learning performance, criteria include 1) eventual convergence to optimal behavior, 2) speed of convergence to (near-)optimality, and 3) regret, the expected loss of reward relative to optimal behavior incurred while executing the RL algorithm. [1]

The three main categories of RL algorithms are: 1) dynamic programming, 2) Monte Carlo methods, and 3) temporal difference methods. [2]

Ad-hoc strategies used to balance exploration and exploitation in RL include greedy strategies, randomized strategies (e.g. Boltzmann exploration), interval-based techniques and more. [1]
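As an example of a randomized strategy, here is a minimal sketch of Boltzmann (softmax) action selection in Python; the temperature value is an arbitrary illustrative choice:

import math
import random

def boltzmann_select(q_values, temperature=0.5):
    """Pick an action with probability proportional to exp(Q / T).

    A high temperature gives near-uniform exploration; a low temperature
    gives near-greedy exploitation. T = 0.5 is illustrative, not a
    recommendation.
    """
    weights = [math.exp(q / temperature) for q in q_values]
    threshold = random.random() * sum(weights)
    cumulative = 0.0
    for action, w in enumerate(weights):
        cumulative += w
        if cumulative >= threshold:
            return action
    return len(q_values) - 1   # numerical safety fallback

# Example: the action with value 0.9 is chosen most often, but not always.
print(boltzmann_select([0.1, 0.2, 0.9]))

In practice the temperature is often decreased over time, shifting the agent gradually from exploration toward exploitation.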

RL algorithms can also be classified into model-free methods and model-based methods. Model-free methods include Monte Carlo methods and temporal difference methods. Model-based methods include dynamic programming, certainty-equivalent methods, Dyna, prioritized sweeping/queue-Dyna, RTDP (Real-Time Dynamic Programming), the Plexus planning system, etc. These are briefly described in [1].

Some of the central issues of RL are:
  • Exploration vs. exploitation
This is illustrated by the bandit problem, in which a bandit machine has several levers with different payoff values. The player is given a fixed number of chances to pull the levers, and needs to balance the number of trials used to find the lever with the best payoff against the number of trials used to pull only that lever (see the epsilon-greedy sketch above).
  • Life-long learning
The learning is real-time and continues through the entire life of the agent; the agent learns and acts simultaneously. This kind of life-long learning is also called "online learning".
  • Immediate vs. delayed reward
An RL agent needs to maximize the expected long-term reward. To achieve this goal the agent must weigh immediate reward against delayed reward, and try not to get stuck in a local optimum.
  • Generalization over input and action
In RL with a model-free method (e.g. Q-learning), a problem is how to apply the learned knowledge to the parts of the world not yet experienced. Model-based methods are better in this situation, but they need enough prior knowledge about the environment, which may be unrealistic, and their computational burden suffers from the curse of dimensionality. Model-free methods, on the other hand, require no prior knowledge, but they make inefficient use of what has been learned, therefore need much more experience, and do not generalize well.
  • Partially observable environments
The real world may not allow the agent a full and accurate perception of the environment, so partial information must often be used to guide the agent's behavior.
  • Scalability
The RL algorithms available so far all lack a way to scale up from toy applications to real-world applications.
  • Principles vs. field knowledge
This is the general problem faced by AI: a general problem solver based on first principles does not exist. Different algorithms are needed to solve different problems. Moreover, adding field knowledge is often beneficial, and sometimes necessary, to significantly improve the performance of a solution.

III. Q-learning

The Q-learning algorithm, introduced by Watkins in 1989, is rooted in dynamic programming and is the special case of TD(λ) with λ = 0. It handles discounted infinite-horizon MDPs (Markov Decision Processes), is easy to implement, is insensitive to the exploration strategy, and is so far one of the most popular, and seemingly the most effective, model-free algorithms for learning from delayed reinforcement. However, it does not address the scaling problem, and it may converge quite slowly.

The Q-learning rule is [3, p373]:

    Q(s, a) <- Q(s, a) + α [ r + γ max_{a'} Q(s', a') - Q(s, a) ]
where: s - current state, s' - next state, a - action, a' - action of the next state, r - immediate reward, α - learning rate, γ - discount factor, Q(s,a) - expected discounted reinforcement of taking action a in state s. <s, a, r, s'> is an experience tuple.

The Q-learning algorithm is:

    For each s, a: initialize the table entry Q(s, a) <- 0
    Observe the current state s
    Do forever:
        Select an action a and execute it
        Receive the immediate reward r
        Observe the new state s'
        Update the table entry: Q(s, a) <- Q(s, a) + α [ r + γ max_{a'} Q(s', a') - Q(s, a) ]
        s <- s'
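The loop above translates almost line for line into code. The following is a minimal tabular Q-learning sketch in Python; the toy chain MDP, its reward of 1 at the right end, and the hyperparameter values are all invented for illustration:

import random
from collections import defaultdict

# Illustrative 4-state chain MDP: action 0 moves left, action 1 moves
# right; reaching the rightmost state yields reward 1 and the episode
# restarts. States, rewards and hyperparameters are made up.
N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(s, a):
    """Deterministic toy dynamics: move left/right along the chain."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

Q = defaultdict(float)          # table entries Q[(s, a)], initialized to 0

s = 0
for _ in range(10000):
    # Select an action (epsilon-greedy, so every pair keeps being visited)
    if random.random() < EPSILON:
        a = random.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda x: Q[(s, x)])
    s2, r = step(s, a)          # execute it, receive r, observe s'
    # Q(s,a) <- Q(s,a) + alpha [ r + gamma max_a' Q(s',a') - Q(s,a) ]
    best_next = max(Q[(s2, x)] for x in range(N_ACTIONS))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
    s = 0 if s2 == N_STATES - 1 else s2     # restart episode at the goal

print({k: round(v, 2) for k, v in sorted(Q.items())})

On this toy problem the learned values quickly favor action 1 (move right) in every state, which is the optimal policy for reaching the rewarding end of the chain.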

The convergence criteria are:
  • The system is a deterministic MDP.
  • The immediate reward values are bounded by some constant.
  • The agent visits every possible state-action pair infinitely often.

IV. Applications of RL

Some examples of the applications of RL are in game playing, robotics, elevator control, network routing and finance.

TD-Gammon is the application of the TD(λ) algorithm to backgammon. It achieved the competence level of the best human players. However, some people argue that its designer was already an accomplished backgammon programmer before incorporating the TD(λ) algorithm, and that the program therefore borrowed a great deal from his prior programming knowledge of the game. In any case, similar success has not been achieved in other games such as chess or Go. Another attempt that claims very good performance was a trial on Tetris [16].

For robotics, many applications and experiments have already been carried out.

V. Literature

At this time the most important textbook on RL is [2], written by Sutton and Barto in 1998. [3] is a popular machine learning textbook that devotes chapter 13 to RL. [1] gives a brief introduction to the major topics of RL but lacks detail. [4] discusses the relationship between RL and dynamic programming. A lot of RL material is available from Sutton's homepage [5, 6, 12] and from the RLAI lab [8] at the University of Alberta, where Sutton is based. The value iteration and policy iteration methods used in RL can be traced back to the work of Howard [19].

References:
  1. Leslie Pack Kaelbling, Michael L. Littman, Andrew W. Moore. Reinforcement Learning: A Survey. (1996) 
  2. Richard S. Sutton, Andrew G. Barto. Reinforcement Learning: An Introduction. (1998) 
  3. Tom M. Mitchell. Machine Learning. (1997) 
  4. Barto, A., Bradtke, S., Singh, S. Learning to act using real-time dynamic programming. Artificial Intelligence, Special volume: Computational research on interaction and agency. 72(1), 81-138. (1995) 
  5. Homepage of Richard S. Sutton
  6. Richard S. Sutton. 499/699 courses on Reinforcement Learning. University of Alberta, Spring 2006. 
  7. Reinforcement learning - good source of RL materials, readings online. 
  8. RL and AI - RL community. 
  9. RL research repository at UM - a centralized resource for research on RL. 
  10. RL introduction warehouse
  11. RL using NN, with applications to motor control - A PhD thesis (2002, French). 
  12. RL FAQ - by Sutton, initiated 8/13/2001, last updated on 2/4/2004. 
  13. RL and AI research team - iCore, Sutton. 
  14. RL research problems - 1) scaling up, 2) partially-observable MDP. 
  15. Application of RL to dialogue strategy selection in a spoken dialogue system for email - 2000 
  16. RL Tetris example - few applications of RL seem to exist besides backgammon; here is an attempt with Tetris, with seemingly good results. (1998) 
  17. Q-learning by examples - numeric example, tower of hanoi, using matlab, Excel etc. 
  18. RL course website - Utrecht University, 2006 Spring. 
  19. Ronald A. Howard. Dynamic Programming and Markov Processes. (1960) 


Source: http://www2.hawaii.edu/~chenx/ics699rl/grid/rl.html#abstract