By Geraldine Murphy
I was lucky enough to grab a place on the SRHE Discourse Analysis workshop on the 24th of March in London. The session was facilitated by the very experienced @karensmith_HE and covered a range of approaches to undertaking Discourse Analysis across a variety of disciplines. The workshop was aimed at researchers at all levels of experience and confidence, and at various stages of their projects. The mix of projects, levels and experiences among the delegates made for rich discussion around the best uses and applications of the approach, including when to use it… and when not to!
The day began with an introduction to Discourse Analysis. As I have found throughout my research, Discourse Analysis is an umbrella term that encapsulates ways of approaching and analysing ‘texts’, whether spoken, written, non-verbal, image, moving image or other. The approaches covered within the workshop were Conversation Analysis, Genre Analysis, Multimodal approaches, Corpus-based approaches and Critical Discourse Analysis. It was the latter that I deemed most appropriate to my own research, specifically the work of Gee (1996) and Fairclough (2010), although elements of all of the approaches could be useful in future research.
Each section of the day brought with it a practical task: a ‘text’ and a ‘tool’. The first section covered the various definitions of DA set out by academics using it in the field. Of the six discussed within the introduction to the workshop, it was Gee’s (2001) notion that I was most familiar with, despite drawing Foucauldian-esque themes from all six definitions. Discourse Analysis, according to Gee (2011), is ‘a study of language at use in the world’, and Gee goes on to describe how language ‘does things’ as well as ‘says things’. For me, this was the rationale for using this type of analysis on any piece of qualitative data, particularly data which captures an individual making sense of something, expressing a thought or opinion, or explaining why they do or do not feel a certain way, because the language and the way it is being used in these contexts is purposeful. In short, our language (the words we use) does a ‘job’, and the interesting part for a researcher who employs a Discourse Analysis approach is uncovering what that ‘job’ is.
The second section of the day was a practical application of Conversation Analysis, using an example transcript taken from Hardman (2015). The analysis of this text drew on some of Paltridge’s (2012) analysis tools:
- Adjacency pairs
- Preference organisation
- Turn-taking
- Feedback
- Repair
- Conversational openings and closings
- Discourse Markers
- Response tokens
This practical task was interesting as it made the researcher focus on the text: to break it down (decode), see past the words (encode) and look at the structure of the language used. This section went down really well with the Linguistics specialists!
Once we had grappled with these tools we moved on to Genre and then Corpus-based approaches, an area of DA that I hadn’t really encountered at all within my own research. Corpus-based approaches involve researchers using large banks of samples of writing and language use over time. The British National Corpus, for example, is a 100-million-word collection of language use which you can use to pull themes out of language use, to see incremental change or effect in language across a period of history. In Ethnographic research this type of approach may be fundamental; however, for the purposes of my research project, although incredibly interesting, this approach would not capture the data I wish to capture (but may provide some much-needed context for the teaching of literacy).
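To make the corpus idea concrete (this was not part of the workshop, and the tiny corpus below is invented for illustration rather than drawn from a real resource like the British National Corpus), a minimal Python sketch of the underlying technique might track a word’s relative frequency across time periods:

```python
from collections import Counter

# A hypothetical mini-corpus: text snippets grouped by decade.
# Real corpus work would query a large resource such as the BNC,
# not a toy dictionary like this one.
corpus = {
    "1990s": "literacy means reading and writing print literacy matters",
    "2000s": "digital literacy and media literacy join print literacy",
    "2010s": "digital literacy digital skills and digital citizenship",
}

def relative_frequency(term, text):
    """Occurrences of `term` per word in `text` (simple whitespace tokens)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return Counter(tokens)[term] / len(tokens)

# Track how often 'digital' appears in each period to see change over time.
trend = {period: relative_frequency("digital", text)
         for period, text in corpus.items()}

for period, freq in trend.items():
    print(f"{period}: {freq:.3f}")
```

A rising frequency across the periods is the kind of incremental change in language use that a corpus-based approach is designed to surface.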
After lunch the workshop shifted focus to the Multimodal and Critical Discourse Analysis approaches. Using practical examples, the delegates were asked to use tools to look at website ‘texts’ and draw out the inferred meaning of the semiotic tools used within them. For example: the use of colour; the composition and structure of the page (what is dominant?); the images used (what do they convey? what is missing?); the menu (what is the ‘reader’ presented with, and why?); the purpose of the ‘text’ (what is it trying to do?). Due to my past life as a Media student, this type of analysis of the visual felt like putting on a pair of old slippers (as we all know and agree that Media Studies is not a Mickey Mouse degree).
Moving to the Critical, the analysis tools of CDA (Gee, Fairclough) were again very familiar from my own research into the Discourses of Digital Literacy. This section briefly covered Fairclough’s (2010) dimensions of Discourse and Discourse Analysis, which place the object of analysis at the centre, surrounded by analysis of the text, its production and processing, and the social dimension (explanation), making clear the importance of context within any Critical Analysis. This approach to looking at ‘language’ and ‘texts’ aims to discover or reveal the ‘connections between language, ideology and power’ and the idea that ‘power relations are discursive’: power is transmitted, practised and shifted through discourse. This type of analysis is highly important if we take the example of policy texts within education: whose ‘voice’ is a given text promoting? How and why is this the dominant voice, and how can it be challenged?
The workshop ended with a quick reflection on the work we had done and the tools that we had employed to focus in on language, to decode and encode, to make meaning and construct meaning. The event was highly informative… and fun, and I would urge any new researcher to think about using this approach.
This post was first published on Geraldine’s own Digital Literacies blog at https://digitalliteraciesblog.wordpress.com and is reproduced here with the author’s permission.