
Information Science Master's Internship at KPN

Building detectors of anomalies in log files with machine learning

Tomer Gabay, s2726769

Supervisor KPN: Martin Mol, martin.mol@kpn.com
Supervisor RUG: Malvina Nissim, m.nissim@rug.nl
Student RUG: Tomer Gabay, t.gabay@student.rug.nl, +31622236470, Boterdiep 22a, 9712 LP Groningen

May 29, 2019

ABSTRACT

My internship for the Information Science master took place as a member of the ID&P team at the Personal Cloud department of KPN. In my eight weeks at KPN I gained a lot of knowledge of the Elastic Stack database system, and especially of Kibana and its Machine Learning software. I completed an Elastic Stack course and set up five anomaly detectors which detect anomalies in the logging of KPN's Smartlife cameras. These detectors can now be used to easily spot anomalies such as server problems. At the end of my internship I gave a tutorial on the Machine Learning software to encourage other employees to make use of all its capabilities. Three teams are now working on applying the software to their data as well.

CONTENTS

Abstract
Preface
1 Introduction
2 The internship
   2.1 KPN Personal Cloud, ID&P team and scrum
   2.2 Internship goals
       2.2.1 Project goal
       2.2.2 Learning goals
   2.3 Familiarize with the Elastic Stack
   2.4 Familiarize with Kibana and the data sets
       2.4.1 Kibana
       2.4.2 The data sets
   2.5 Applying Machine Learning
       2.5.1 Error code 1000 logs
       2.5.2 Smartlife cameras
   2.6 Machine Learning tutorial
3 Evaluation
   3.1 Project goal
   3.2 Learning goals
   3.3 General evaluation and conclusion
4 Bibliography
Appendices
A JIRA
B Kibana examples
C Machine Learning implementations on KPN data
D Girlsday

PREFACE

Ever since I was a teenager, I have dreaded having a nine-to-five office job. Nevertheless, my love for programming and data science led me to the Information Science master. Searching for an internship made me realize that I was most likely indeed going to end up with a nine-to-five office job. After interviews with a handful of companies, the conversation with my supervisor-to-be Martin Mol convinced me to do my internship with the ID&P team of the Personal Cloud department of KPN. This internship gave me eight weeks to experience the very thing I feared. And guess what? I had a very pleasant eight weeks. Not only did I learn a lot, I had a lot of fun as well.

I would like to thank everyone at Personal Cloud for making me feel welcome, and a special thanks to the members of ID&P, who were always willing to help me out and made me have a great time. Of course, this internship wouldn't have been possible without the aid of my supervisors Martin Mol and Malvina Nissim, and I'd like to thank them for all their time, effort and support.

1 INTRODUCTION

My search for an internship during my Information Science Master started in February 2019. The website of the University of Groningen, Nestor, has a dedicated section to help students find internship openings relevant to their master. However, since Information Science is officially placed within the Communication & Information masters, most openings were not suited for Information Science students. Because of this, I had to find another way to get in touch with the right companies. Nowadays, LinkedIn is the perfect platform to contact companies for a job or internship, and through LinkedIn I contacted several companies, most of which responded positively. Meanwhile, I asked my dad, who has worked in the IT sector for over 30 years, whether he knew people who could use an intern. After more than five conversations with different companies, found through LinkedIn and my dad, I ended up at the Personal Cloud department of KPN.

I chose to do my internship here after a pleasant conversation with my supervisor-to-be, Martin Mol. He told me that Personal Cloud has a huge live database with log files. My goal was to let Machine Learning algorithms automatically detect anomalies in this data. In this way, the members of the ID&P team can detect problems with more ease and less effort, or even detect problems they didn't know they had.

2 THE INTERNSHIP

2.1 KPN Personal Cloud, ID&P team and scrum

KPN is one of the biggest Dutch landline and mobile telecommunications companies. The goal of the ID&P team at the Personal Cloud department is to provide easy integration for all kinds of (consumer) internet services, covering buying, activation, provisioning, usage and billing. Meanwhile, the team monitors and provides support for these services, which of course occasionally have problems. Having anomaly detectors can help the team detect (potential) issues with less effort.

At Personal Cloud, they work with scrum. Scrum is an agile framework which works with sprints of one to four weeks, for which goals are set. Every morning there is a fifteen-minute stand-up meeting to track progress and re-plan if necessary (Schwaber, 2004). A scrum team consists of a product owner, a scrum master, and a development team.

The product owner represents the product's stakeholders and the voice of the customer. He or she prioritizes the tasks for the team, to maximize the value the team delivers (Morris, 2017).

The scrum master's main task is to ensure that the scrum framework is followed.

The development team carries out all the tasks necessary to complete the sprint goals (Morris, 2017). At ID&P, the team is also responsible for the continuous delivery of its services, as it's a DevOps team.

The team of ID&P consists of ten people. One product owner, one scrum master, and eight developers. Every morning there is a stand-up of fifteen minutes to discuss the progress of the tasks which were set for the sprint. From week four and onward of my internship, my internship goals were included in the sprint goals and discussed every morning during the stand-up. Therefore, at the end of my internship I’ve become familiar with the scrum way of working and JIRA, a software tool for agile project management.

2.2 Internship goals

2.2.1 Project goal

At ID&P, millions of logs of different applications are stored with the aid of the Elastic Stack (see2.3). At my arrival, to find anomalies in logs to detect if something isn’t working properly, graphs (see AppendixB) and manual checks were used. My goal was to set up anomaly detectors build on Machine Learning to automatically detect anomalies. Which data sets and anomalies were fit for Machine Learning also had to be researched by me, since they weren’t using Machine Learning yet upon my arrival.


2.2.2 Learning goals

My internship learning goals were set as follows:

• Being familiar with a new database environment (Elasticsearch)
• Develop and improve skills on automatic data processing
• Gain more knowledge in the workings of database systems
• Improve skills on machine learning
• Learn how to work with a data visualization environment (Kibana)
• Experience how it is to work at the IT department of a company

2.3 Familiarize with the Elastic Stack

The database I would use for my internship was the Elastic Stack. Since I had only limited database experience beforehand, and no Elastic Stack experience at all, I decided that I needed to improve my knowledge in these areas. In order to familiarize myself with the Elastic Stack I took a course on Udemy: Elasticsearch 6 and Elastic Stack - In Depth and Hands On. The Elastic Stack consists of five main components (see Figure 1):

1. Beats: a platform for data shippers, capable of sending data from thousands of machines to Logstash or Elasticsearch.

2. Logstash: a server-side data processing pipeline that ingests and transforms data, to be sent to e.g. Elasticsearch.

3. Elasticsearch: a distributed RESTful search and analytics engine. It centrally stores your data.

4. X-Pack: a package which contains features like machine learning, security and alerting.

5. Kibana: a platform for easy navigation and visualization of data.

After one week I completed the course. Even though I didn't immediately become an Elastic Stack expert, I acquired a far better understanding of the workings of a new database system, which was one of my learning goals. For my internship I mostly used X-Pack and Kibana, but having an understanding of the complete workings of the Elastic Stack definitely helped me during my time at KPN and might help me in my future career as well.


Figure 1: Overview of the Elastic Stack

Source: elastic.co (2019b)

2.4 Familiarize with Kibana and the data sets

After the first week of learning the overall basics of the Elastic Stack, it was time to dive deeper into X-Pack and Kibana. Unfortunately, a migration of the platform on which the Elastic Stack was built had just started, which meant that X-Pack (which contained the Machine Learning software) would only become available by the end of the third week of my internship. I decided that in the meantime I could invest in learning the data sets of ID&P and the workings of Kibana.

2.4.1 Kibana

Kibana is the platform for easy navigation and visualization of data. Here, you can use a graphical interface to go through your data and visualize statistics, whereas with Elasticsearch you'd have to use Lucene and JSON-based queries and have no visualization at all (a small illustration of both query styles follows the list below). There are different tabs to use in Kibana:

Discover: the place to scroll through your data. Functionality such as filters is available in the UI or can be expressed as Lucene queries (see Figure 7, Appendix B).

Visualize: here you can apply visualizations like charts and graphs to your data (see Figure 8, Appendix B).

Dashboard: the place to combine visualizations to create a visual overview of your data (see Figure 9, Appendix B).

Timelion: here you can write queries to perform calculations on your data and display the result in a time graph (see Figure 10, Appendix B).
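To give an impression of the difference between the two query styles, the same filter is shown below in both forms. This is an illustrative sketch only: the index name smartlife-logs-* and the host value camera-gw-01 are made up, while the fields response and host.name come from the Smartlife data set described in 2.5.2. In the Discover search bar, a Lucene query looks like this:

    response:500 AND host.name:"camera-gw-01"

Sent directly to Elasticsearch (here in Kibana Dev Tools syntax), the same filter has to be written in the JSON query DSL:

    GET smartlife-logs-*/_search
    {
      "query": {
        "bool": {
          "filter": [
            { "term": { "response": 500 } },
            { "term": { "host.name": "camera-gw-01" } }
          ]
        }
      }
    }

A Timelion expression for a related statistic, the count of HTTP 5xx responses over time, could look like .es(index='smartlife-logs-*', q='response:[500 TO 599]').label('HTTP 5xx').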

I spent the second and third week of my internship learning the ins and outs of these different tabs, using video tutorials, the official documentation and conversations with KPN employees who already had Kibana experience. After these two weeks I was comfortable with almost every aspect of Kibana.


2.4.2 The data sets

At the same time, I tried to dive into all the available data I had, which was a lot. Two indices were of relevance, with up to 69 variables per log and millions of logs per index. Logs of different applications, with different formats, were in the same index, and logs of acceptance and production machines were in the same index too. In short, a lot of data filtering had to be done to prepare the data for Machine Learning. However, this is very hard if you aren't familiar with this kind of data. Understanding all the data turned out to be the trickiest part of my internship: I had no experience with log files, very limited knowledge of the workings of servers, and no knowledge of the online data flows and structures of KPN. This is where the support of my supervisor at KPN, Martin Mol, turned out to be very helpful. Even though I kept bumping into misunderstandings and questions about the data until the end of my internship, these two weeks at least gave me enough knowledge about the data sets to start using Machine Learning algorithms on them.

2.5 Applying Machine Learning

At the beginning of the fourth week of my internship, on the 29th of April, the transfer of the Elastic Stack to a new platform was completed and fully functional, which meant that I could start using the Machine Learning software. Unfortunately, no data had been transferred: the new data set was limited to logs from the 22nd or 29th of April onward. This loss of data was a shame, since more data contributes to more reliable Machine Learning performance. Also, some anomalies only occur once every few weeks or months, while the new data set would span only four weeks by the end of my internship. Nevertheless, millions of new logs are added to the data set per day, so within a few days a reasonable data set was available again.

2.5.1 Error code 1000 logs

When things aren't going as planned in a sequence of logs, an external program warns with a code 1000 error. I tried to find out whether Machine Learning could recognize the patterns of logs that would generate a code 1000 error.

Unfortunately, this turned out to be very hard, and after a few days of research and trying, I agreed with Mr. Mol to focus on something else. The reason it was so hard for Machine Learning to recognize the pattern of logs leading to code 1000 errors lies in the combination of a very complex data set and the limitations of the Machine Learning software.

Every action in the relevant KPN software generates a set of logs. Such a set can contain over 100 individual logs, with, in this data set, 17 variables per log (see Figure 11, Appendix C). The set can be identified as a set because all its logs share the same transaction ID. The problem with the Machine Learning software of the Elastic Stack, however, lies in the workings of time buckets. A time bucket of five minutes means that the software calculates statistics per five minutes and tries to detect whether any five-minute time frame has unusual statistics (Chang, 2017). This means that all order within a time bucket is lost, while in these sets of log files order is of very high importance for detecting whether something isn't working properly.

Besides the limitations of the software, the data itself was also very tricky. Error 1000 codes are often not generated by just one set of logs, but as a consequence of multiple related sets of logs. However, one action, with its own unique transaction ID, often triggers another action with its own unique transaction ID. This means that related actions cannot be linked by their ID, which makes it nearly impossible to connect them. Therefore, Mr. Mol and I came to the conclusion that this task couldn't be completed within the short period of my internship.


2.5.2 Smartlife cameras

Instead of focusing on the error code 1000 logs, Mr. Mol pointed out that the Smartlife cameras might have a data set more suitable for Machine Learning. Smartlife cameras are webcam-like cameras which customers can buy and place in or around their house, e.g. for security reasons. The cameras' footage is stored in the cloud, and the cameras can be monitored remotely. Every time a camera connects to a server, logs are written. Each log consists of 51 variables. A few interesting examples:

• geoip.ip: uniquely identifies every customer.
• host.name: host that handled the request.
• method: HTTP method of the request (e.g. PUT, GET).
• timetaken: number of milliseconds a request took.
• response: HTTP response code of the request (e.g. 404, 500).

Explaining in detail how the Machine Learning software of the Elastic Stack works, and all its capabilities, is beyond the scope of this internship report. What's important to understand is that the software tries to detect anomalies within a certain time bucket. This is a form of unsupervised Machine Learning, as the software isn't told what the output should be (Matarić and Arkin, 2007). It detects patterns and tries to determine whether there are anomalies in the data compared to the expected behaviour. The steps to create an anomaly detector can roughly be summarized as follows (a configuration sketch follows the list):

1. Use filters to make your data homogeneous. If your data is too diverse, no normal behaviour or patterns can be detected, and thus no anomalies.

2. Determine which variable defines the 'population' of the data. This could be e.g. users, cameras or servers. By determining the population, the Machine Learning software knows which entities' behaviour should be compared (optional).

3. Specify on which variable to calculate the normal behaviour, e.g. the average response time of a server.

4. Determine the influencers. Influencers are variables that could cause a change in behaviour. The Machine Learning software will tell you which influencer was of significant importance in which anomaly. Without specifying the right influencers, it can be very hard to detect why a certain anomaly happened. Of course, if an important influencer isn't present in your data set as a variable, it can be even harder to determine why an anomaly happened.

5. Specify the right time bucket. The span of the time bucket can completely determine which anomalies are found. In the examples of Figure 2 and Figure 3 you can see that the detector with a one-hour time bucket detects other anomalies than the detector with a five-hour time bucket, while the data set is exactly the same. It is thus crucial to invest time and research into determining which time bucket suits which anomaly detector.
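To show how these steps translate into an actual detector, below is a minimal sketch of a job and its datafeed as they could be created through the X-Pack Machine Learning API of Elasticsearch 6.x (the Kibana UI generates an equivalent configuration). Everything specific here is an assumption for illustration: the job id smartlife_5xx_demo, the index pattern smartlife-logs-* and the time field @timestamp are made up, since the real KPN configurations can't be shown; only the fields geoip.ip, host.name and response are taken from the variable list above.

    PUT _xpack/ml/anomaly_detectors/smartlife_5xx_demo
    {
      "description": "Illustrative job: unusually high counts of HTTP 5xx responses per IP",
      "analysis_config": {
        "bucket_span": "1h",
        "detectors": [
          {
            "function": "high_count",
            "over_field_name": "geoip.ip",
            "detector_description": "high_count over geoip.ip"
          }
        ],
        "influencers": [ "geoip.ip", "host.name", "response" ]
      },
      "data_description": { "time_field": "@timestamp" }
    }

    PUT _xpack/ml/datafeeds/datafeed-smartlife_5xx_demo
    {
      "job_id": "smartlife_5xx_demo",
      "indices": [ "smartlife-logs-*" ],
      "query": { "range": { "response": { "gte": 500, "lt": 600 } } }
    }

The pieces map onto the steps above: the datafeed query restricts the data to 5xx responses (step 1), over_field_name defines the population (step 2), the high_count function is the metric to model (step 3), and the influencers and bucket_span correspond to steps 4 and 5.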


Figure 2: Visualization of anomalies found on a data set with a time bucket set to one hour.

Figure 3: Anomalies found with the same data set as Figure 2, but with a time bucket set to five hours.

Using roughly these steps, I've built five anomaly detectors, which are still in use by the ID&P team. Because they run on private company data, I can't display the detectors' full JSON configurations. However, I've added a snapshot with a piece of the JSON code of the anomaly detector seen in Figure 4 to give an impression (see Figure 17, Appendix C).

In this anomaly detector, it's searching for unusually high counts of HTTP 5xx response codes. HTTP 5xx response codes are related to server errors. In Figure 4 you can easily see that three IPs have had an unusually high number of HTTP 5xx response codes. On the left you can also see which hosts were involved in the matter, and which 5xx response code was of importance (500). The maximum anomaly score found is the number next to the (red) bars, with the total anomaly score next to it in a white square. Higher scores indicate stronger anomalies. I won't go into the mathematics of how anomaly scores are calculated, which is explained in Collier and Harveson (2017), but they are derived from the probability and severity of an anomaly.

This anomaly detector is an example of how you can use anomaly detectors to easily see when, why, and which anomalies occurred. Without the Machine Learning software it takes far more time and effort to find such anomalies, if you find them at all. A graph of the number of 5xx response codes can be found in Figure 5. The Machine Learning algorithm detected server issues on the 6th of May, while this information is impossible to extract from the 5xx code graph. Moreover, the Machine Learning software also indicates which IP addresses and which machines were related to what kind of server issues, information which also can't be found in the graph of Figure 5 and would have to be searched for manually in the log data.
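Such anomalies can also be retrieved outside the Kibana UI, since the X-Pack ML results API exposes the individual anomaly records with their scores and influencers. A hedged sketch, reusing the hypothetical job id from the configuration sketch earlier in this section; the score threshold and start date are arbitrary:

    GET _xpack/ml/anomaly_detectors/smartlife_5xx_demo/results/records
    {
      "start": "2019-05-01T00:00:00Z",
      "record_score": 75,
      "sort": "record_score",
      "desc": true
    }

This returns only records with an anomaly score of at least 75, sorted from strongest to weakest, which is convenient when a detector is noisy.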

I won't discuss all five anomaly detectors individually; screenshots of the other four can be found in Appendix C. The idea of every anomaly detector is the same: it can find and indicate causes for anomalies that humans would be unable to detect in regular graphs.


Figure 4: The anomaly detector of HTTP 5xx codes.

2.6 Machine Learning tutorial

When nearing the end of my internship, I proposed to give a Machine Learning presentation/tutorial, to prevent the knowledge I had gained from leaving the department. It was scheduled for Monday the 27th of May, and over 20 KPN employees from different teams attended the tutorial.

In this tutorial, I showed how to build anomaly detectors with the Machine Learning software of Kibana. I received very positive feedback on the tutorial, and three teams (not counting ID&P) have already started to use the software since. Hence, I believe there's a chance that my internship has sparked a small Machine Learning revolution in the Personal Cloud department of KPN.

3 EVALUATION

3.1 Project goal

My project goal for the internship was to build anomaly detectors with Machine Learning (see 2.2.1). By the end of my internship, I had managed to build five anomaly detectors with Machine Learning. These anomaly detectors help the ID&P team detect (potential) problems more easily. Still, I believe the biggest gain for KPN will be my Machine Learning tutorial. At my arrival, except for a few test attempts, nobody had actually used the Machine Learning software properly. Now I've shown over twenty people from different teams what can be gained by using the software, and the basics of how to use it. At least one team (ID&P) is now using the software, while three other teams at Personal Cloud are busy implementing it as well. Thus I absolutely feel that my internship has had a positive impact on KPN.

3.2 Learning goals

My internship learning goals, and my view of how each developed:

• Being familiar with a new database environment (Elasticsearch)

Having completed an Elastic Stack course on Udemy and working with the database system for several weeks, I can definitely say that this goal has been achieved.

• Develop and improve skills on automatic data processing

In the Udemy course some automatic data processing tasks were given. However, as I was not supposed to change the structure of the KPN database, data processing was not of much significance for my project goal. Instead, I spent a lot of time filtering data, using both the UI and Lucene queries.

• Gain more knowledge in the workings of database systems

Most of my database experience came from Information Science courses containing MySQL-related assignments, so having worked with the Elastic Stack for multiple weeks has given me a much broader knowledge of the workings of databases.

• Improve skills on machine learning

Because the Machine Learning software of the Elastic Stack requires little programming, I would say that my knowledge of Machine Learning has improved a lot, rather than my 'skills'.

• Learn how to work with a data visualization environment (Kibana)

I had no experience with a data visualization environment like Kibana before my internship, but I really enjoyed using one. I now consider myself an experienced Kibana user, and if I pursue a career as a data scientist, this might come in handy.

• Experience how it is to work at the IT department of a company

After spending eight weeks in the ID&P team I have had the full experience of working at an IT department: I participated in scrum stand-ups, joined meetings and visited demos of other teams. I never felt I was being treated differently for being an intern, which was very pleasant.

3.3 General evaluation and conclusion

In total, I am very pleased with my internship. Of course, not everything went completely smoothly. The Machine Learning software only became available three weeks after the start of my time at KPN, and I underestimated how different log data sets are from the linguistic data sets I was accustomed to at Information Science. Still, I've been able to apply the knowledge of Machine Learning and data science I gained at Information Science, while also learning a lot of new things (see 3.2).

The feedback I received on my Machine Learning tutorial and anomaly detectors was very satisfying, and since three teams have started to use the Machine Learning software since my tutorial, I believe KPN has benefited from my internship as well. It has also been pleasant that not everything in my internship was focused on the anomaly project alone: I also participated in other activities such as scrum meetings, Girlsday (see Appendix D), and demos. Because of this, I feel I have had the complete experience of being a KPN employee.

I believe this internship will definitely help me in my future career, having built up experience with the Elastic Stack, its Kibana and Machine Learning software, and scrum, as well as a lot of new connections in the IT sector.

4 BIBLIOGRAPHY

amazon.com (2015). Tutorial: Visualizing customer support calls with Amazon Elasticsearch Service and Kibana. https://www.elastic.co/guide/en/kibana/current/tutorial-sample-discover.html, visited on 20-05-2019.

Chang, S. (2017). Explaining the bucket span in machine learning for Elasticsearch. https://www.elastic.co/blog/explaining-the-bucket-span-in-machine-learning-for-elasticsearch, visited on 30-04-2019.

Collier, R. and P. Harveson (2017). Machine learning anomaly scoring and Elasticsearch - how it works. https://www.elastic.co/blog/machine-learning-anomaly-scoring-elasticsearch-how-it-works, visited on 21-05-2019.

elastic.co (2019a). Dashboard. https://www.elastic.co/guide/en/kibana/current/dashboard.html, visited on 20-05-2019.

elastic.co (2019b). Security and threat detection with the Elastic Stack. https://www.elastic.co/webinars/security-and-threat-detection-with-the-elastic-stack, visited on 15-05-2019.

elastic.co (2019c). Using Discover. https://www.elastic.co/guide/en/kibana/current/tutorial-sample-discover.html, visited on 17-05-2019.

Ezequiel, P. (2018). Timelion in ELK stack. https://medium.com/@pablo_ezequiel/timelion-in-elk-stack-d692bd9b6bbe, visited on 20-05-2019.

Hannan, T. (2017). Deep dive into Elastic machine learning. https://www.youtube.com/watch?v=dPAVyv0u-40, visited on 17-04-2019.

Matarić, M. J. and R. C. Arkin (2007). The robotics primer. MIT Press.

Morris, D. (2017). Scrum in easy steps: An ideal framework for agile projects. In Easy Steps Limited.

Schwaber, K. (2004). Agile project management with Scrum. Microsoft Press.

V.H.T.O. (2019). About Girlsday. https://www.vhto.nl/projecten/girlsday/over-girlsday/, visited on 15-05-2019.


Appendices

A JIRA

Figure 6: JIRA software for agile project management, such as scrum. Stories are divided into different tasks; when all tasks are done, the story is completed. The left column displays tasks that nobody has started yet, the second column contains tasks being worked on, the third column shows tasks waiting for something or someone else before they can be completed, and the right column shows completed tasks.

B KIBANA EXAMPLES

All screenshots in this section contain dummy data, as the data sets of KPN contain sensitive and private data.

Figure 7: Discover tab of Kibana, where you can scroll through your data and apply filters.

Source: elastic.co (2019c)


Figure 8: Visualize tab of Kibana, where you can visualize your data with charts, graphs and other things.


Figure 9: Dashboard tab of Kibana, where you can combine visualizations to create a visual overview of your data.


Figure 10: Timelion tab of Kibana, where you can visualize query calculations in a time graph.

C MACHINE LEARNING IMPLEMENTATIONS ON KPN DATA

Figure 11: The variables that were available for the data set with code 1000 errors. Values are not shown here due to sensitive and/or private data.


Figure 12: Anomaly detector that detects suspiciously high numbers of storage errors for Smartlife cameras. As can clearly be seen from this detector, something happened on the 21st of May which caused a lot of customers to get insufficient-storage errors. Windows patching turned out to be the cause.

Figure 13: Anomaly detector that detects when a host has a suspiciously high average process time per log. On the 5th of May, a single IP address caused one machine to have an average process time four times higher than expected, which of course is detrimental.


Figure 14: Anomaly detector that detects when a customer has an unexpected drop in HTTP response codes that indicate success (HTTP 200, 201, 203). The influence of the Windows patching on the 21st of May can also be seen in this detector (as in the detector of Figure 12).

Figure 15: Anomaly detector that detects activity of foreign IP addresses on machines where foreign IP addresses aren't expected. Foreign IP addresses on these machines can indicate malicious actions.


D GIRLSDAY

The 11th of April was Girlsday, a day organized by VHTO¹ to make girls aged ten to fifteen enthusiastic about IT and the technical sector by visiting relevant companies (V.H.T.O., 2019). KPN decided to join as well, with the idea of having girls from a local middle school build their own website. I was asked if I was interested in helping, and it seemed like a fun thing to do, so I participated in two meetings to make preparations. Besides showing the girls how to make a website, we also wanted them to learn something about scrum. We decided to split the 25 girls into groups of two and three and place each group in front of one laptop. Every group had a product owner (an employee of KPN), and after two sprints of around 40 minutes the groups presented their websites to the others. They were handed a few pages with instructions on how to make a website on Simpsite, and could ask their product owner for help if necessary.

At the end of the event, all groups had made their own website, and both the girls and the KPN employees were positively surprised by what they had achieved within such a short time period. Girlsday was definitely a success for KPN and I am pleased to have contributed to that.

Figure 17: Me helping girls on Girlsday to build their own website.

¹ VHTO stands for Vrouwen in het Hoger Technisch Onderwijs (Women in Higher Technical Education).
