Some methodological issues in language research: Dealing with transcribed interpreted courtroom data

Abstract: Transcription is a crucial tool in language research, particularly in discourse analysis, as it provides a distillation of real-time interactions. In the 21st century, researchers are increasingly interested in studying authentic data samples, such as audio-recorded court hearings, which requires turning evanescent speech into readable and analysable formats. Transcription, however, involves a complex set of theoretical and methodological questions about language, which themselves make it a rich source of examinable data. Researchers therefore need to develop adequate methodologies to make such data available for their research endeavours. This exploratory study presents transcription as a methodology for researchers interested in language and ethnographic methods, addressing critical considerations such as what data should be transcribed, who should transcribe it, and how it should be represented. The paper also explores innovations in transcription and presents the benefits and challenges of transcription as a methodology, particularly in language research.


Introduction
Researchers from several fields have concluded that transcribing has become commonplace in qualitative research over time (Point and Baruch, 2023; Cassell and Bishop, 2019; Francis and Holloway, 2007). Point and Baruch (2023) further assert that transcription is a natural extension of the traditional qualitative research process and of the practice of recording and transcribing interviews and other qualitative data.
However, although recordings are crucial for discourse analysis and language study in general, they are insufficient on their own for a thorough analysis of interaction. The fleeting, extremely complex, and frequently overlapping events of an interaction as they unfold in real time are simply too difficult to recall. This is why transcripts are so useful: they offer a condensed version of an interaction's occurrences that would otherwise escape investigation, stripped of superfluous details and articulated in the terms the researcher is interested in.
Studies based on actual data samples have grown in popularity in the twenty-first century, which calls for transcription of the data in order to convert ephemeral speech into an analysable form. For this study's data, for example, a significant amount of transcription work was necessary to convert actual, audio-recorded court proceedings into a format that could be read and analysed. The researcher found that transcription is not just a matter of typing up audio or recorded material, as if transcripts were crystal clear and accurately depicted raw reality in text. Instead, because language is not transparent in itself, transcription raises a number of theoretical and methodological questions about language that provide a wealth of data for analysis.
The rising interest in authentic data samples means that research data often requires extensive transcription, such as converting actual audio recordings of court proceedings into an accessible format. Researchers must therefore consider appropriate approaches to make such data available for their research projects. To analyse discussions, such as those that take place in a court of law or in a medical setting, it is necessary to convert audio-recorded data into a written format that can be easily analysed (Niemants, 2012). As a research technique, transcription is a type of translation: it involves the transfer of material from the oral into the written form. At the same time, transcription is a multidisciplinary topic embracing a variety of domains of research and practice (Ochs, 1979; Bolden, 2015). This paper reports exploratory research and offers transcription as a tool for researchers, particularly those with a focus on language and those who commonly employ ethnographic techniques, in which substantial portions of interview material are typically transcribed for more in-depth analysis. It examines some of the important factors that must be taken into account when researchers employ transcribed data, including what has to be transcribed, who should do the task, and how the data should be represented.
Finally, the study discusses the advantages and difficulties of transcribing as a methodology, particularly in language research. It also looks at some of the advancements and innovations in the field of transcription.

Transcription in past and contemporary times
As already mentioned above, transcription is a type of translation because it allows data that may, for instance, exist only in oral form to be converted into written form (Davidson, 2009). In this sense, oral performances involving audiences, emotions, and references to the social and religious context are how literatures from all over the world have historically been, and still are, experienced. This so-called performance literature, common in many non-Western cultures, has all but vanished in the West and now survives only as written transcriptions that partially or completely preserve the original language (Ochs, 1979).
In contemporary life, transcription seems to take place any time speech is captured in text. One can find it, for instance, in court and medical records, political proceedings, and student notes. Where non-native speakers and those with hearing impairments would benefit from written subtitles to compensate for linguistic deficiencies, transcription may be utilized (Bailey, 2008). Although debatable in terms of accuracy and detail, journalism is another area where transcribing is frequently used. The field of phonetics, where transcription has a long history, has also made significant contributions. According to Jefferson (1991), it is within this context that the International Phonetic Alphabet (IPA) was created to reflect the speech characteristics that are unique to spoken language: phonemes, intonation, and the separation of words. Niemants (2012) claims that in pragmatics and discourse studies, transcription focuses on the analysis of talk that occurs during interaction, while conversation analysis has been distinguished by a focus on transcription conventions and practices. Jefferson (2002), for instance, developed a system in which specific symbols were used to transcribe: a) time, place and date of the original recording; b) participant identifications; c) words as spoken; d) sounds as uttered; e) inaudible or incomprehensible words or syllables; f) silences; g) overlapping speech and sounds; h) prosodic features (how something is said) such as pace, stretching, stress and volume. The pre-digital Jeffersonian transcription system has arguably been the most extensively used transcription convention of the past 30 years, and it opened the way for a number of more modern, computer- and user-friendly transcription conventions.

Principles of transcription
According to Edward (2003), transcripts essentially contain three different types of encoding: transcription, coding, and markup. Transcription is the process of encoding the flow of discourse events in a textual and spatial medium. Its main components are who said what, to whom, in what manner, and under what circumstances. Similar information can be found in a play's script, but with greater methodical organization and specificity. Numerous categories that have been found helpful in discourse research are interpretive in character rather than wholly dependent on precise quantitative measurements. Interpretive categories are necessary because the goal of discourse research is to capture aspects of interaction as experienced by human participants, and because these aspects are not yet specifiable through physical data alone. For instance, perceived pause length is influenced by a number of variables beyond the amount of physical time, including speech rate and the placement of the pause (such as within a clause, between clauses, or between speaker turns).
According to Niemants (2012), coding, also known as annotation or tagging, is a different type of encoding. Syntactic categories (such as nouns, verbs, and adjectives), semantic distinctions (such as motion verbs and manner verbs), and pragmatic acts (such as directive, prohibition, and claim) are a few examples of coding. Coding creates equivalence classes that enable apparently disparate objects to be quickly brought together for closer scrutiny, speeding up analysis and computer search.
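The equivalence classes that coding creates can be sketched minimally. The pragmatic-act codes below follow the examples in the text, while the numbered turns are hypothetical and not drawn from the corpus:

```python
from collections import defaultdict

# Hypothetical pragmatic-act coding of numbered turns; the codes
# (claim, directive, prohibition) follow the examples in the text.
coded_turns = [
    (1, "claim"),
    (2, "directive"),
    (3, "claim"),
    (4, "prohibition"),
]

# Build an index from code to turn numbers: an equivalence class that
# brings apparently disparate turns together for closer scrutiny.
index = defaultdict(list)
for turn_no, code in coded_turns:
    index[code].append(turn_no)

print(index["claim"])  # → [1, 3]
```

Once indexed this way, all turns of a given type can be retrieved in a single step, which is exactly the speed-up for analysis and computer search that coding is meant to provide.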
The third type is mark-up, which is concerned with format-relevant specifications rather than content and is meant to be interpreted by a typesetter or computer software for things like proper text segmentation and cataloguing of its parts, in the service of formatting, retrieval, tabulation, or related processes (Niemants, 2012). It also plays a key role in data exchange and emergent encoding standards. Transcription and coding systems are divided into sub-domains (e.g., pause length, intonation contour and syntactic category). The categories used in describing a particular sub-domain (e.g., short or long pause) function as alternatives to one another; that is, they constitute a contrast set. To be descriptively useful, the categories within each contrast set must satisfy three general principles: 1) They must be capable of systematic discrimination. In other words, it must be obvious for each part of the interaction whether or not a particular category applies; membership in a category may be determined by its defining qualities or by its resemblance to prototype exemplars. 2) They must be exhaustive. That is, there must be a category that accurately describes every relevant element or occurrence in the data, even if it merely says "miscellaneous" in certain, thankfully infrequent, instances. 3) The categories must be effectively contrastive. They must, therefore, be concentrated on differences that are significant to the research question. For instance, a "short" pause in the flow of information during monologues could be 0.2 s, whereas in turn-taking research it could be 0.5 s.
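As a concrete illustration of these three principles, the sketch below assigns pauses to the categories of a contrast set. The thresholds are illustrative assumptions echoing the 0.2 s and 0.5 s figures above, not fixed standards:

```python
def classify_pause(duration_s, long_threshold=0.5):
    """Place a pause into exactly one category of a contrast set.

    Every duration falls into exactly one class (systematic
    discrimination), and the final return acts as a catch-all,
    so no pause is left undescribed (exhaustiveness).
    """
    if duration_s < 0.2:
        return "micro"   # e.g., the (.) micropause of Jeffersonian notation
    if duration_s < long_threshold:
        return "short"
    return "long"

# Effective contrast: turn-taking research might use a 0.5 s threshold,
# while monologue research might care mainly about the 0.2 s cut-off.
print(classify_pause(0.1), classify_pause(0.3), classify_pause(1.0))
# → micro short long
```

The adjustable threshold makes the third principle visible: the same physical durations are partitioned differently depending on which contrasts matter to the research question.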

The problem
Only a small number of scholars are proficient in the process of turning conversations, songs, and films into written texts for deeper examination. Even those with a plan struggle to decide what to include in and exclude from their transcripts. The reality is that spoken language is never exactly like written language, and most transcripts differ greatly from the original spoken texts. Transcription is therefore not a simple process of transcribing audio or video-taped data because: 1) transcripts are typically not clear and occasionally do not accurately capture the reality of spoken words; and 2) transcription also involves a variety of methodological concerns surrounding language, which is itself not straightforward.
This paper therefore explores some of the key challenges posed by the use of transcription in language research. For instance, the researchers explore the question of which transcription notation system to use, as different transcription systems are appropriate for different kinds of research. Another common issue relates to selectivity, i.e., what to include in or exclude from the transcripts. Finally, the paper presents some of the recent innovations in the field of transcription, as computer-aided transcription tools are now available for use by language researchers and other interested parties.

The data
In Zimbabwean courtrooms, all serious cases, such as rape, are audio-recorded for record purposes and for review at the High Court. Audio-recorded interpreter-mediated interactions were transcribed into written form for purposes of analysis with the assistance of transcribers at two regional magistrate courts. Table 1 below shows the transcription notation used for this study, while Tables 2-6 are drawn from a corpus of transcriptions of court cases heard at two regional magistrate courts in Zimbabwe, Harare and Mutare.
The excerpts were chosen from 19 cases with a combined total of 49,319 words uttered in English. The transcripts of the courtroom exchanges are presented using modified versions of Du Bois' (1991) transcription method for oral discourse and annotation rules from the Jeffersonian (1996, 2002) methods. The study employs the turn as the unit of analysis. Turns at talk, prosody, vocalisms, and other non-verbal characteristics and events were used in transcribing the interactions. Some of these symbols are illustrated in Table 1.
Table 1. Transcription notation used for this paper.

Symbol | Meaning | Use
S1:, S2: | Speaker identity | Speakers are generally numbered in the order in which they speak; the speaker's identity is shown at the beginning of each turn.
M/F | Speaker gender | The gender of the different court players is indicated just before the role itself, e.g., M. Magistrate: for a male magistrate.
Capital initial | Grammar | Indicates sentence/utterance beginning.
[text] | Brackets | Indicate the start and end points of overlapping speech.
(.) | Micropause | A brief pause, usually less than 0.2 s.
↓ | Down arrow | Indicates falling pitch or intonation.
↑ | Up arrow | Indicates rising pitch or intonation.
- | Hyphen | Indicates an abrupt halt or interruption in an utterance.
>text< | Greater-than/less-than symbols | Indicate that the enclosed speech was delivered more rapidly than usual for the speaker.
<text> | Less-than/greater-than symbols | Indicate that the enclosed speech was delivered more slowly than usual for the speaker.
° | Degree symbol | Indicates whisper, reduced volume, or quiet speech.
((italic text)) | Italic text in double parentheses | Annotation of non-verbal activity.
Underlined text | Underlining | Indicates speech/word/phrase said with emphasis.
Some methodological considerations in the transcription process

The researcher was very much aware that, when using transcription as a method, the process and the transcriber themselves are just as significant as the transcript itself. According to Jenks (2011), scholars can no longer claim to be blind to the mediation process involved in this unique type of translation from the oral to the written form, or to the role of transcriptionists. Aligning audio and transcript reveals the changes and losses involved in the transcription process, just as translations with parallel text expose the transformations and losses involved in translation. Transcribers must understand that, like translators, they may not be able to create an identical replica of the original because "no transcription is a complete record of a spoken event" (Cencini and Aston, 2002).
In order to create the transcripts for this study, the researcher had to take into account theoretical issues as well as matters of convenience, such as speed, simplicity, and fashionability (Niemants, 2012). These theoretical issues are discussed in more detail below.
Researchers first have to choose the transcribing convention, notation system, and coding guidelines. With numerous applications in various fields, transcription is a skill that is equal parts art and science. Transcripts have emerged as the most popular and effective tool, for instance, in the domains of speech and conversation analysis. The process of transcribing spoken language and embodied conduct has been thoroughly discussed by many authors (Edwards, 2003; Goodwin, 1994; Mondada, 2007), as have the format and representational choices in transcripts (Du Bois et al., 1993; Ochs, 1979), the status of transcripts and their relationship to the object of study (Duranti, 2006), and historical trends and upcoming challenges in the transcription of embodied practices (Mondada, 2015).
The discussion above suggests that there are various transcription conventions, and that transcription for various purposes can result in various texts. According to Mishler (1995), all transcription procedures, notation systems, coding rules, and criteria select some subset of features to transcribe, and the choice reflects theoretical views about the relationship between language and meaning, as well as the specific study's objectives. In this study, conversational data were transcribed using modified versions of the conventions developed by Gail Jefferson for the analysis of conversational turns and of Du Bois' (1993) method, which included specific symbols and, among other details, prosodic features (how something is said) such as pace, stretching, stress and volume.
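One practical consequence of adopting a rich notation is that a plain reading version sometimes has to be derived from the detailed transcript. The sketch below strips the prosodic symbols of Table 1; the function name and the exact symbol set handled are illustrative choices, not part of the conventions cited:

```python
import re

def plain_version(line):
    """Strip Jefferson-style prosodic markup (micropauses, pitch arrows,
    overlap brackets, pace and volume marks) to leave readable text."""
    text = re.sub(r"\(\.\)", " ", line)        # micropauses
    text = re.sub(r"[↑↓°\[\]<>]", "", text)    # pitch, volume, overlap, pace
    return re.sub(r"\s+", " ", text).strip()   # collapse leftover spacing

print(plain_version("Just the dustcoat (.) with >nothing underneath< ↑"))
# → Just the dustcoat with nothing underneath
```

Keeping the detailed and plain versions side by side makes the transformations and losses of transcription visible, much as parallel texts do for translation.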
When transcribing, however, one must constantly keep in mind that the process and the transcriber themselves are just as important as the finished product. It also bears repeating that, like translators, transcribers cannot provide an exact replica of the original, because "no transcription is a complete record of a spoken event" (Cencini and Aston, 2002). Some of these important factors are explained below.

Selecting and preparing the data
Data preparation is a significant step between data collection and analysis, according to Niemants (2012). It rests on an interpretive process, and the first problem is selection. Selection entails exclusion; therefore, like a competent map maker, the transcriber should decide what to leave out rather than what to include. What is left out frequently reflects the goals and philosophies of the study. In light of Niemants' (2012) warning, the researcher had to shape the transcripts carefully so that they would not be overly detailed: transcripts that are overly complex are challenging to understand and evaluate, and a more usable transcript is preferable. Niemants (2012), however, discourages arbitrary selection and urges the transcriber to use deliberate filtering instead. The researchers applied Niemants' (2012) taxonomy when preparing the transcripts for this investigation, based on six key concerns: 1) participants involved in the interaction; 2) conversational structure; 3) linguistic and paralinguistic features; 4) prosodic features; 5) silences (and their duration); 6) kinesic elements (gaze, gesture and body movements), which all fall under the label "multimodality".

Data representation
Niemants (2012) asserts that the goals of the research have a significant impact on how the data are represented. There are various ways to put words into writing, making visible what was previously only audible. The researcher's first choice was whether to adopt an orthographic or a phonetic transcription. The researchers then had to decide what should be included as the fundamental units and what level of contextual information to provide.
These units included turns at talk, phrases or sentence-like objects, and words or syllables. Next, the researcher had to consider how to encode and distinguish speech from nonverbal and even non-vocal aspects like laughter and pauses. Finally, the researcher had to give proper thought to how to depict breaks, overlaps, inaudible parts, and dialectal traits.
This study's transcription system was created on the basis of transcription systems developed within conversation analysis, discussed above (Du Bois, 1993; Jefferson, 1992). These transcription systems are praised for their thoroughness, as they strive to take into consideration all facets of oral communicative behaviour that could contribute to describing what participants are constructing in their talk-in-interaction. The completeness of conversation-analytic tools, however, makes transcription a very challenging operation that can leave the transcripts almost unintelligible to non-specialists.
Consider the example below for a more thorough understanding of how the data were presented in Table 2.

MI: VaMuchuchisi wari kuda kuziwa kuti iwe chakanyatsoita kuti uti musungwa ndiye chete anga arara newe kwete umwe munhu semwana wake chii ↑
(The public prosecutor really wants to know what makes you so certain that it is the accused who raped you and not anyone else, such as his son, for instance.)
In addition to the exact utterances made by the interlocutors, Table 3 captures non-linguistic information, such as the speaker identities MPP (Male Public Prosecutor) and MI (Male Interpreter).
In the context of this study, this non-linguistic information was deemed important as the researchers could then analyse the linguistic behaviour of different court actors, e.g., how male and female participants used language during trials.
Along with this non-linguistic information, Table 3 also demonstrates paralinguistic features, such as micropauses (.) and the rising pitch typically associated with asking questions, indicated by the up arrow (↑). Such paralinguistic phenomena are crucial when analysing talk because, for example, pauses and their frequency and duration communicate something about the speaker and the event. The speaker in the excerpt above is identified as MM (Male Magistrate). The transcript then captures what is said as well as how it is uttered, as seen in the micropause, which in this instance marks the conclusion of an introductory statement and the start of significant information.
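Because the frequency and duration of such phenomena are analytically meaningful, they can also be counted per speaker. The sketch below assumes each transcript line begins with a speaker code followed by a colon; the sample lines are constructed for illustration, not quoted from the corpus:

```python
from collections import Counter

def paralinguistic_counts(lines):
    """Count micropauses (.) and rising-pitch marks ↑ per speaker,
    assuming each line starts with a 'CODE:' speaker label."""
    counts = {}
    for line in lines:
        speaker, _, utterance = line.partition(":")
        c = counts.setdefault(speaker.strip(), Counter())
        c["micropause"] += utterance.count("(.)")
        c["rising"] += utterance.count("↑")
    return counts

# Constructed sample lines in the notation of Table 1.
sample = [
    "MM: Just the dustcoat (.) with nothing underneath ↑ perhaps",
    "FW: (.) no (.) I never said that",
]
print(paralinguistic_counts(sample))
```

Simple tallies like these can support, for example, comparisons of how often different court actors pause or use question intonation across a whole corpus.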
The excerpt above shows the conversation between the public prosecutor and the witness during cross-examination. The focus of the cross-examination is what truly occurred when the accused sexually assaulted the complainant, who is the state's witness in this case. The magistrate had to interject during the cross-examination to obtain further information on the accused's clothing when he entered the complainant's bedroom. The magistrate's further probing question is accompanied by laughter when he says, "Just the @dustcoat with nothing underneath ↑ perhaps", as shown in Table 4. Although the magistrate's laughter was unexpected, it was understandable given that many other spectators in the gallery were equally amused. The majority of people, including the magistrate, laughed at the idea that the accused, who was the complainant's father, either slept in a dustcoat or likely used the dustcoat as part of his regalia for raping his own daughter. However, this laughter, which is a crucial component of communication, was not rendered by the interpreter.
Table 5 above shows a lengthened version of the witness' response. The lengthened interpretation in Table 5 is a result of the insertion of certain linguistic items which the interpreter perceived to be implicit in the original utterance. Instead of simply rendering the source language message as "No", the interpreter made it "I NEVER said anything like that" (note that the word never is uttered with increased volume for emphasis, as shown by the upper-case marking on that word). The added linguistic items include the absolute negative NEVER (Jenks, 2011), which slightly changes the style in which the utterance is made. The inclusion of never potentially strengthens the witness's credibility (by emphasizing denial in absolute terms), which may have an impact on the outcome of the trial.

Speaker | SL utterances/interpretations | English gloss
MPP: How old was the complainant by then ↑
FI: ((Iye wenyu uyu)) anga aine makore mangani panguva iyoyo ↑ (How old was this child of yours at that particular time?)
As shown in Table 6 above, the emphasis carried by the phrase "Iye wenyu uyu" (this child of yours) in the interpreter's version is not explicitly present in the public prosecutor's query. It might be argued that the interpreter avoided ambiguity, or even sharpened the reference in the public prosecutor's query, by using the personal pronoun "Iye" and the demonstrative pronoun "uyu" along with pointing as a non-verbal activity.
By showing the non-verbal features accompanying talk, such as emphasis and gestural aspects like pointing, the researchers could then further analyse the reasons for the inclusion of these non-verbal aspects by the interpreter. Had these not been included in the transcripts, crucial information that warranted further investigation would have been hidden from the researchers. Therefore, when researchers use transcripts, they need to make careful decisions about what to include in and exclude from them.
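Such inclusion decisions also determine what can later be searched for mechanically. Where emphasis is rendered in upper case, as with NEVER above, emphasized words can be recovered automatically; the speaker-code filter below is an illustrative assumption:

```python
import re

SPEAKER_CODES = {"MPP", "MI", "FI", "MM"}  # codes used in this paper's excerpts

def emphasized_words(turn):
    """Return upper-case words of two or more letters, skipping speaker
    codes, as a proxy for emphasis in plain-text transcripts."""
    return [w for w in re.findall(r"\b[A-Z]{2,}\b", turn)
            if w not in SPEAKER_CODES]

print(emphasized_words("FI: I NEVER said anything like that"))  # → ['NEVER']
```

Had emphasis simply been dropped from the transcript, a search like this would find nothing, and the interpreter's strengthening of the denial would be invisible to the analysis.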
Tables 1-6 above display the various types of linguistic and non-linguistic data that the researchers recorded in their transcripts, together with the underlying reasons why these details were included. Researchers should be careful when representing speech in their transcripts, and it is crucial to note that no transcript is a perfect replica of the original speech event. They should also carefully consider what to include in and what to leave out of their transcripts. All of these decisions are shaped by the researcher's purpose and research questions; the transcription will then depend on the type of data required to answer them. Some of the most recent developments in the field of transcription are discussed briefly below.

Ethical considerations when using transcription
The ethical concerns raised by using transcription are a crucial issue that is typically under-addressed in research involving transcribed data (Point and Baruch, 2023; Cassel and Bishop, 2019; Bolden, 2015). Bolden (2015), for instance, raises three major cautions. First, one should always determine whether ethical approval is needed from an Institutional Review Board (IRB), a university ethics committee, or other authorities before beginning data collection. For this study, the Chief Magistrate in Zimbabwe approved the gathering of courtroom data, and the University of the Witwatersrand's Ethics Committee granted ethical clearance; the research was part of the researcher's 2017 PhD thesis at the University of the Witwatersrand in South Africa.
In addition, Bolden (2015) suggests that if a researcher chooses to use a transcriber, the transcriber should sign a confidentiality agreement in order to prevent the leaking of participants' private information. Finally, researchers must decide whether personally identifiable information (PII) should be deleted from the transcript. This can occur during transcription, in which case transcriptionists should be made aware of what PII is (Saunders and Townsend, 2016; Bolden, 2015; Bailey, 2008). Bolden (2015) further warns that very accurate transcripts (including representations of accents and linguistic idioms) run the risk of disclosing participants' identities. Researchers should take this risk into account when guiding transcriptionists on the level of detail and representation necessary for the analysis.

Innovations in transcription
Researchers in different regions of the world have switched from manual to automated, computer-assisted transcription techniques as a result of innovations in transcription. According to Jenks (2011), these technologies typically make transcription exceedingly fast, less tedious, and more accurate. Express Scribe, Happy Scribe, and SoundScriber are a few examples of such often-used tools. Almost all audio and video formats can be converted from voice to text using these programs, which often operate online. Hepburn and Bolden (2013) state that most transcribing systems do not impose file size restrictions on uploads. Within a few minutes, the tools may use voice recognition technology to transcribe an audio or video recording. Above all, the majority of transcription tools are freely accessible online.

Common features of SoundScriber and Express Scribe
Both programmes are specialised audio players that run on PC or Mac and are designed to aid transcription of audio recordings, as shown in Figures 1 and 2 below. They can be installed on a researcher's computer, and the researcher can use a foot pedal (as illustrated in Figure 3 below) or a keyboard with "hot" keys to control audio playback. Because they make it simple to perform actions like pause, play, and playback, hot-keys are also known as shortcut keys (Jenks, 2011). These transcription programmes also include capabilities useful to typists, such as variable-speed playback, video playback, and file management.

Advantages of SoundScriber and Express Scribe
Jenks (2011) notes that both SoundScriber and Express Scribe require only a few simple installation and setup steps and support professional Universal Serial Bus (USB) foot pedals to control playback, as demonstrated in the figures above. They are also compatible with most voice recognition programs, such as Dragon NaturallySpeaking, which can convert speech to text automatically. Last but not least, these two programmes can be used to edit transcripts in Microsoft Word and other popular word processors.

Relevance of the study to language researchers and consultants
This study has explored the challenges of transcribing conversations and oral interviews (semi-structured, open, and in-depth), which are frequently used in qualitative research. Examples include media interviews; legal interactions, such as lawyer-client interviews and courtroom exchanges; and medical interactions, such as doctor-patient interviews and medical counselling. The paper has provided examples of some of the major difficulties associated with using transcription in linguistic studies. For instance, as there are various transcription systems suitable for various types of research, the researchers looked into the question of which transcription notation system to utilize. The researcher's choices about what to include in and exclude from the transcripts are the other frequently raised topic. The article also discussed some recent developments in the field of transcription, as language researchers and other interested parties can now use computer-aided transcription technologies (Point and Baruch, 2023). As a result, the work serves as a helpful piece of exploratory research for those who frequently engage in studies where transcripts are a source of data, enabling researchers to approach the transcribing process with more caution and knowledge. The article also points to a potential revenue stream for language consulting: those who are adept at transcription can charge a fee for their services, and transcription becomes a source of livelihood.

Current and future trends of transcription
Through its treatment of transcription, which is typically under-addressed in qualitative studies, this work has contributed to the ongoing discussion about improving qualitative research (Point and Baruch, 2023; Cassel and Bishop, 2019). Although the difficulties presented in this study have a bias toward recorded courtroom data and language research, they may be applicable to other fields where qualitative research is frequently used.
Although transcribing is clearly a critical stage in qualitative studies, particularly those involving language, very few researchers have specifically addressed the difficulties of the transcription process. Researchers should consider whether, when, and how to employ transcription (Poland, 1995). By doing this, researchers may create transcripts of conversation that are more structured and formal. Sadly, it is uncommon to find a researcher even mentioning that they used written English punctuation rules in their transcripts. Space restrictions in academic articles, which drive researchers to limit their contribution and frequently compress the methods section, are to blame for this lack of attention, according to Point and Baruch (2023).
As acknowledged by researchers (Mondada, 2007; Point and Baruch, 2023), transcribing is a laborious process. For Saunders and Townsend (2016), each hour of interview audio takes at least five hours to transcribe. Therefore, even though it may appear to introduce more biases into the analysis, researchers fully understand the temptation to outsource this job. One finding in this work, however, is that only a small number of researchers declare and, more crucially, acknowledge this outsourcing.
In light of the non-disclosure concern mentioned above, researchers should consider including a comprehensive discussion of transcribing in academic articles presenting qualitative investigations. Before beginning transcription, researchers must decide how much data they want to transcribe (Point and Baruch, 2023; Cassell and Bishop, 2019) and approach the process as one that reflects theoretical objectives and definitions (Ochs, 1979). This study has demonstrated that transcription is generally taken for granted because it is not thought to be theoretically significant.
Finally, a relatively small number of articles favor direct coding and typically omit transcription. Although transcription of recorded data is a vital stage in their interpretation (Heritage, 2013), some researchers, such as Davidson (2009) and Bolden (2015), have questioned the need to transcribe recordings at all. Given the digitalization of qualitative research over the past few decades and the systematic use of software in recent years, this study makes the case that transcribing can be avoided with the aid of new qualitative data analysis software capable of supporting direct coding. By coding directly from the audio (or video) format and eliminating transcription, researchers may be encouraged to make greater use of comments and memos, helping to ensure that the coding analysis is more thorough and accurate.
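The idea of direct coding can be illustrated with a minimal sketch: instead of producing a transcript, the researcher attaches analytic codes and memos to time-stamped segments of the recording itself. The data structures, method names, codes, and file name below are illustrative assumptions for this sketch, not features of any particular qualitative data analysis package.

```python
from dataclasses import dataclass, field

@dataclass
class CodedSegment:
    """A code applied directly to a stretch of audio, with no transcript."""
    start_s: float   # segment start, in seconds
    end_s: float     # segment end, in seconds
    code: str        # analytic code, e.g. "interpreter-clarification"
    memo: str = ""   # researcher's memo on this segment

@dataclass
class AudioCoding:
    """All codes attached to one recording."""
    audio_file: str
    segments: list = field(default_factory=list)

    def add(self, start_s, end_s, code, memo=""):
        self.segments.append(CodedSegment(start_s, end_s, code, memo))

    def by_code(self, code):
        """Retrieve every segment tagged with a given code."""
        return [s for s in self.segments if s.code == code]

# Usage: coding a (hypothetical) courtroom recording from the audio timeline.
coding = AudioCoding("hearing_example.wav")
coding.add(62.5, 71.0, "interpreter-clarification",
           "Interpreter asks the magistrate to repeat the ruling.")
coding.add(130.0, 142.5, "interpreter-clarification")
coding.add(200.0, 215.0, "overlapping-speech")

print(len(coding.by_code("interpreter-clarification")))  # 2
```

The point of the sketch is that the memo and the time stamps, not a written transcript, carry the analytic record; the researcher returns to the audio itself whenever the segment must be re-examined.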
Overall, Bailey (2008) concludes that reading transcripts actually takes more effort than coding directly. If the transcription process is abandoned, this may call into question the way many methodologists view transcription: that it is itself analysis, that it is a crucial warm-up for more in-depth analytic work, or that it gives the researcher cognitive ownership of, and strong insight into, the data (Bailey, 2008; Davidson, 2009).

Conclusion
The researcher has noted repeatedly that many qualitative studies gather audio or visual data (such as recordings of interviews, courtroom proceedings, focus groups, or consultations), and that these are typically transcribed into written form for further in-depth analysis.
In support of Davidson's (2009) earlier research, the researcher has also demonstrated that, despite the misconception that transcription is a simple technical process, it actually involves complex decisions about the level of detail to include (such as whether to leave out non-verbal aspects of interaction), data interpretation, and data representation. For instance, a researcher must determine whether it is important to record certain noises, such as "Hm", "Ok", "Ah", "Yeah", "Um" and "Uh huh". Unlike many other involuntary noises, such as throat clearing, these carry meaning that can affect a discussion.
Very often, researchers do not record these vocalizations, despite the fact that they can offer significant insight into both the factual content of the conversation and the style of communication, i.e., how one converses (Bailey, 2008; Bolden, 2015). The study has demonstrated that converting verbal and visual data into written form is an interpretive process, making it the initial stage of data analysis. Projects with different goals and methodological philosophies will require different levels of detail and representations of the data. For researchers who are new to qualitative data analysis, this article may therefore serve as a reference on the practical and theoretical issues involved.
This work has summarized the aspects that should be taken into consideration whenever transcripts are used for study. Although the paper attempted to demonstrate how useful transcripts are for discourse analysis, it has also argued that transcription is never theory-neutral. However helpful they are, transcripts are not objective representations of the data; they are fundamentally interpretive and selective, far from exhaustive and objective. The study has demonstrated that the researcher makes decisions regarding the sorts of data to be preserved, the descriptive categories to be used, and the presentation of the data in the written and spatial format of a transcript. Each of these decisions can influence how the researcher perceives the structure of the interaction (Ehlich, 1992), making some regularities in the data easier to spot and others more difficult.
Because transcribed data are not an uncommon source of qualitative research data, and researchers need to appreciate and be mindful of the dynamics involved in using such data, it is hoped that this paper will contribute to the effective use of transcripts and to the continued development of discourse methodology more generally. To ensure that transcripts accurately reflect the oral texts, research objectives will often dictate what should be included and what excluded.

Figure 1. Connecting the foot pedal in Sound and Express Scribe (NCH Software, n.d.).

Figure 2. Setting up the pedal in the tools (NCH Software, n.d.).
MM: Let me explain to you what the law says (.) In terms of the Criminal Procedure and Evidence Act, this medical report can only be produced if you consent to it, in other words, when you don't dispute the doctor's findings and when you don't have questions to put to the doctor, when you agree to its production without the doctor being called.

Table 5. Pitch and emphasis.

Table 6. Non-verbal activity of pointing.